I am using a Docker bind mount to map the host /dev/serial/ folder generated by Ubuntu (which contains identifying symlinks to serial devices such as /dev/ttyUSB0) into a container. The full docker run command I am using is
docker run -d --restart always --privileged=true -v /dev/serial:/dev/serial DOCKER_IMAGE_NAME
This works fine at first, but if the serial device is disconnected and reconnected, the symlinks are recreated. This change does not propagate into the Docker container; instead the container sees an empty /dev/serial folder. I also tested manually creating a file in this directory, both on the host and inside the container, and strangely in both cases the change was not visible on the other side.
The volume is shown as
{
    "Type": "bind",
    "Source": "/dev/serial",
    "Destination": "/dev/serial",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
}
EDIT: Ubuntu creates the symlinks within two directories, by-path and by-id underneath the /dev/serial folder.
Bind mounts are based on inodes; when the file is deleted and recreated, the bind mount is broken because it still points at the old inode. The change is not propagated to the bind mount until the container is restarted, at which point the mount picks up the new inode.
A solution for this case (files are deleted and recreated) is to mount the parent directory instead, so in your case you can mount using -v /dev:/dev. Of course this will expose /dev to the container so handle it with care.
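For example, starting from the run command in the question, the adjusted command could look like this (a sketch; keep the image name and other options as in your setup):
# mounting the parent directory keeps recreated symlinks under /dev/serial visible
docker run -d --restart always --privileged=true -v /dev:/dev DOCKER_IMAGE_NAME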
The issue exists in previous versions of Docker. The reason has been clearly explained by Sebastian Van:
This looks like a race condition; the problem here is that, when using the -v <host-path>:<container-path> option, Docker will automatically create the host path if it does not exist. This functionality was marked for deprecation at some point (exactly because of these issues), but it was considered too much of a breaking change to remove (see moby/moby#21666, and issues linked from that issue).
Docker for Windows (when running Linux containers) runs the docker daemon in a VM. Files/directories shared from your local (Windows) machine are shared with the VM using a shared mount, and I suspect that mount is not yet present when the container is started.
Because of that, the %USERPROFILE%/docker_shared directory is not found inside the VM, so the daemon creates that path (as an empty directory). Later, when the shared mount (from the Windows host) is present, it's mounted "on top" of the %USERPROFILE% directory inside the VM, but at that point, the container is already started.
Restarting (docker stop, docker start) resolves the situation because at that point, the shared mount is available, and the container will mount the correct path.
Follow the thread https://github.com/docker/for-win/issues/1192 for a better understanding. The issue was resolved in Docker 2.0.4.0 (Edge channel) and later in the stable release.
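If a manual restart is not practical, a small wrapper can detect the symptom and restart the container once the shared mount is in place. This is only a rough sketch, assuming a hypothetical container name my_container and mount target /data (neither comes from the thread):
# restart the container once if its mount came up empty
if [ -z "$(docker exec my_container ls -A /data 2>/dev/null)" ]; then
    docker restart my_container
fi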
I have a nice project with a .devcontainer config. Since the VS Code update to 1.63 I have had some trouble with the Docker setup. I'm now using the newest 1.64.0.
I just want to build a new container with a clean volume and start in a fresh environment.
What happens is that a new container starts and I see some stuff from another container.
The same happens if I clone a git repo into a container volume.
Why are some containers connected with the same volume for the workspace?
Why do I fall back every time to the same volume?
In the devcontainer.json I set:
"workspaceFolder": "/workspace",
"workspaceMount": "source=remote-workspace,target=/workspace,type=volume",
To build a new devcontainer in a fresh environment you can install the devcontainer CLI and trigger the build manually.
I usually mount the workspace as a bind mount (on Windows, with the files in WSL2) instead of a volume mount. I think the main issue is the volume name: if both projects have "source=remote-workspace", they will be detected as the same volume.
For Node.js, where I want to keep the node_modules folder inside the container, I have done a double mount following the official VS Code guide for this use case.
So I have left workspaceMount as the default bind mount and then added a volume that overrides a specific folder.
{
    "mounts": [
        //      vvvvvvvvvvv name must be unique
        "source=projectname-node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume"
    ]
}
The result is:
everything under / is served by the container's filesystem,
everything inside ${containerWorkspaceFolder} is served by the bind mount,
and everything inside ${containerWorkspaceFolder}/node_modules is served by the named volume.
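Putting that together, giving each project its own volume name in devcontainer.json should stop the cross-project sharing. This is only a sketch; projectname is a placeholder you would change per project:
"workspaceFolder": "/workspace",
"workspaceMount": "source=projectname-workspace,target=/workspace,type=volume",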
So I have this remote folder /mnt/shared mounted with fuse. It is mostly available, except that there are disconnections from time to time.
The actual mounted folder /mnt/shared becomes available again when the re-connection happens.
The issue is that I put this folder into a docker volume to make it available to my app: /shared. When I start the container, the volume is available.
But if a disconnection happens in the meantime, then even though the /mnt/shared folder on the host machine becomes available again, the /shared folder is no longer accessible from the container, and I get:
user@machine:~$ docker exec -it e313ec554814 bash
root@e313ec554814:/app# ls /shared
ls: cannot access '/shared': Transport endpoint is not connected
In order to get it to work again, the only solution I found is to docker restart e313ec554814, which brings downtime to my app, hence is not an acceptable solution.
So my questions are:
Is this somehow a docker "bug" not to reconnect to the mounted folder when it is available again?
Can I execute this task manually, without having to restart the whole container?
Thanks
I would try the following solution.
If you mount the volume to your docker like so:
docker run -v /mnt/shared:/shared my-image
I would create an intermediate directory /mnt/base/shared and mount it into the container like so:
docker run -v /mnt/base/shared:/base/shared my-image
and I would also adjust my code to refer to the new path, or create a link from /base/shared to /shared inside the container (sketched below).
Explanation:
The problem is that the mounted directory /mnt/shared is probably deleted on the host machine when there is a disconnection, and a new directory is created once the connection is back. However, the container was started with a mapping to the old directory, which has since been deleted. By creating an intermediate directory and mapping to it instead, you avoid this mapping issue.
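A minimal sketch of that first idea, assuming my-image is your image and that /shared does not already exist inside it:
# run with the intermediate directory mapped, as above
docker run -v /mnt/base/shared:/base/shared my-image
# inside the container (for example in its entrypoint), keep the old path working:
ln -s /base/shared /shared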
Another solution that might work is to mount the directory using bind-propagation=shared
e.g:
--mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared
See the Docker docs explaining bind-propagation.
I have a running container with a volume mounted to a local host directory:
"Mounts": [
{
"Source": "/var/lib/postgresql-9.5-docker",
"Destination": "/var/lib/postgresql/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
If I want to provide data from the host to the container (e.g., a big postgres dump), is it safe to directly write the file from the host to the host directory
/var/lib/postgresql-9.5-docker/foo/
?
A quick test shows that this is working (i.e., the file is there if I exec bash into the container and check), but is it safe for data consistency?
Note:
I know that one could also use
docker cp /path/to/src <containerid>:/path/to/dest
But in my specific case this doesn't work when the volume is mounted from Ceph (rbd).
Copying files to a host directory will be as consistent via docker as you expect the host file system to be. There is a very thin layer between container and host.
Using docker volumes via the default local driver will also provide similar access as these volumes also use the local host filesystem.
Copying files into a Docker container's file system will depend on the storage driver you run Docker with. By default this is AUFS (soon to become OverlayFS), so there is an additional layer there over the standard file system. I wouldn't expect that to be less consistent, but due to the extra layer there is more chance of issues or bugs, and it won't perform as well as your local file system either.
Access from both host and container
One feature you get from containers is shared information between the host and container. Everything that you do in the container is actually occurring in the host's kernel. So if you write-lock a file, the host can see that. If you have a file mmapped, then it will share the host's global mmap space.
Accessing or writing to the same file system from both container and host is fine. You won't have differences or delays between the two.
Multiple processes writing to the same file or file location will have the same constraints as any multi-process system. The processes would need to use file locking or a mutex; otherwise writes could be interleaved.
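As an illustration of that last point, if both the host and a process in the container may write into the bind-mounted directory, the host-side import could take a lock first. This is only a sketch; the lock file name and dump path are hypothetical, and the container-side writer would need to honour the same lock:
# take an exclusive lock before copying the dump into the shared directory
flock /var/lib/postgresql-9.5-docker/foo/.import.lock \
    cp /path/to/dump.sql /var/lib/postgresql-9.5-docker/foo/dump.sql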
I know you can share a directory that exists on the host with a container using the VOLUMES directive, but I'd like to do the opposite--share the files that exist in the container with the host, provided that nothing exists in that directory on the host.
The use case for this is that we have contractors, and we'd like to provide them with a Docker container that contains all the code and infrastructure they need to work with, but that would allow them to alter that code by running PyCharm or some other program on their local machine without having to worry about committing the container, etc.
In order to be able to save (persist) data, Docker came up with the concept of volumes. Basically, volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.
You can declare a volume at run-time with the -v flag:
$ docker run -it --name container1 -v /data debian /bin/bash
This will make the directory /data inside the container live outside the Union File System and directly accessible on the host. Any files that the image held inside the /data directory will be copied into the volume. We can find out where the volume lives on the host by using the docker inspect command on the host.
Open a new terminal and leave the previous container running and run:
$ docker inspect container1
The output will provide details of the container configuration, including the volumes. It should look something like the following:
...
"Mounts": [
    {
        "Name": "fac362...80535",
        "Source": "/var/lib/docker/vfs/dir/cde167197ccc3e/_data",
        "Destination": "/data",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
...
This tells us that Docker has mounted /data inside the container as a directory somewhere under /var/lib/docker on the host.
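If you only need that host path, a Go-template filter on docker inspect prints it directly (using the container name from above):
# print just the host directory backing the container's volume
docker inspect -f '{{ range .Mounts }}{{ .Source }}{{ end }}' container1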
In your Dockerfile you can easily ADD or COPY a whole directory; I guess this is what you are looking for. Otherwise you can just keep the code on the host and share it at docker run time, so they can volume-mount the shared code.
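For example, if the code is checked out on the contractor's machine, a bind mount at run time lets them edit it locally with PyCharm while the container uses it live. The path and image name below are placeholders, not something from your setup:
# the contractor edits ~/my_app on the host; the container sees the changes immediately
docker run -it -v ~/my_app:/code your_org/dev_image /bin/bash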
I am learning Docker and I have doubts about when and where to use ADD and VOLUME. Here is what I think both of these do:
ADD
Copy files to the image at build time. The image has all the files so you can deploy very easily. On the other hand, needing to build every time doesn't look like a good idea in development because building requires the developer to run a command to rebuild the container; additionally, building the container can be time-consuming.
VOLUME
I understand that using docker run -v you can mount a host folder inside your container, this way you can easily modify files and watch the app in your container react to the changes. It looks great in development, but I am not sure how to deploy my files this way.
ADD
The fundamental difference between these two is that ADD makes whatever you're adding, be it a folder or just a file, actually part of your image. Anyone who uses the image you've built afterwards will have access to whatever you ADD. This is true even if you afterwards remove it, because Docker works in layers and the ADD layer will still exist as part of the image. To be clear, you only ADD something at build time and cannot ever ADD at run-time.
A few examples of cases where you'd want to use ADD:
You have some requirements in a requirements.txt file that you want to reference and install in your Dockerfile. You can then do: ADD ./requirements.txt /requirements.txt followed by RUN pip install -r /requirements.txt
You want to use your app code as context in your Dockerfile, for example, if you want to set your app directory as the working dir in your image and to have the default command in a container run from your image actually run your app, you can do:
ADD ./ /usr/local/git/my_app
WORKDIR /usr/local/git/my_app
CMD python ./main.py
VOLUME
Volume, on the other hand, just lets a container run from your image have access to some path on whatever local machine the container is being run on. You cannot use files from your VOLUME directory in your Dockerfile. Anything in your volume directory will not be accessible at build-time but will be accessible at run-time.
A few examples of cases where you'd want to use VOLUME:
The app being run in your container writes logs to /var/log/my_app. You want those logs to be accessible on the host machine and not to be deleted when the container is removed. You can do this by creating a mount point at /var/log/my_app: add VOLUME /var/log/my_app to your Dockerfile and then run your container with docker run -v /host/log/dir/my_app:/var/log/my_app some_repo/some_image:some_tag (see the sketch after these examples).
You have some local settings files you want the app in the container to have access to. Perhaps those settings files are different on your local machine vs dev vs production, especially if they are secret, in which case you definitely do not want them in your image. A good strategy in that case is to add VOLUME /etc/settings/my_app_settings to your Dockerfile, run your container with docker run -v /host/settings/dir:/etc/settings/my_app_settings some_repo/some_image:some_tag, and make sure /host/settings/dir exists in all environments you expect your app to be run in.
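To make the first example concrete: because the logs live in the host directory, they outlive the container itself. A sketch reusing the names from above (app1 is just a hypothetical container name):
# logs written by the app end up in the host directory...
docker run --name app1 -v /host/log/dir/my_app:/var/log/my_app some_repo/some_image:some_tag
# ...and are still there after the container is removed
docker rm -f app1
ls /host/log/dir/my_app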
The VOLUME instruction creates a data volume in your Docker container at runtime. The directory provided as an argument to VOLUME is a directory that bypasses the Union File System, and is primarily used for persistent and shared data.
If you run docker inspect <your-container>, you will see under the Mounts section there is a Source which represents the directory location on the host, and a Destination which represents the mounted directory location in the container. For example,
"Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Here are 3 use cases for docker run -v:
docker run -v /data: This is analogous to specifying the VOLUME instruction in your Dockerfile.
docker run -v $host_path:$container_path: This allows you to mount $host_path from your host to $container_path in your container during runtime. In development, this is useful for sharing source code on your host with the container. In production, this can be used to mount things like the host's DNS information (found in /etc/resolv.conf) or secrets into the container. Conversely, you can also use this technique to write the container's logs into specific folders on the host. Both $host_path and $container_path must be absolute paths.
docker run -v my_volume:$container_path: This creates a data volume in your container at $container_path and names it my_volume. It is essentially the same as creating and naming a volume using docker volume create my_volume. Naming a volume like this is useful for a container data volume and a shared-storage volume using a multi-host storage driver like Flocker.
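For the third case, the explicit equivalent would be something like the following (my_volume and the image tag are just the placeholder names used above):
# create (or reuse) the named volume, then mount it into a container
docker volume create my_volume
docker run -v my_volume:/webapp some_repo/some_image:some_tag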
Notice that the approach of mounting a host folder as a data volume is not available in Dockerfile. To quote the docker documentation,
Note: This is not available from a Dockerfile due to the portability and sharing purpose of it. As the host directory is, by its nature, host-dependent, a host directory specified in a Dockerfile probably wouldn't work on all hosts.
Now if you want to copy your files to containers in non-development environments, you can use the ADD or COPY instructions in your Dockerfile. These are what I usually use for non-development deployment.
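A common pattern is to combine the two: bake the code in at build time for production, and override that same path with a bind mount during development. A sketch with the paths from the earlier examples (the image tag is a placeholder):
# Dockerfile (production): code is part of the image
COPY ./ /usr/local/git/my_app
# development: hide the baked-in copy behind a live bind mount from the host
docker run -v $(pwd):/usr/local/git/my_app some_repo/some_image:some_tag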