I have a project with a .devcontainer config. Since the VS Code update to 1.63 I have had some trouble with the Docker setup; I'm now on the newest 1.64.0.
I just want to build a new container with a clean volume and start in a fresh environment.
What happens instead is that a new container starts and I see files from another container.
The same happens if I clone a git repo into a container volume.
Why do several containers end up connected to the same volume for the workspace?
Why do I fall back to the same volume every time?
In the devcontainer.json I set:
"workspaceFolder": "/workspace",
"workspaceMount": "source=remote-workspace,target=/workspace,type=volume",
To build a new dev container in a fresh environment you can install the devcontainer CLI and trigger the build manually.
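For example (a rough sketch; the exact subcommands depend on the devcontainer CLI version you install, and the volume name is the one from your workspaceMount):
# remove the old named volume so the next container really starts clean
docker volume rm remote-workspace
# rebuild the dev container from the workspace folder
devcontainer build --workspace-folder .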
I usually mount the workspace as a bind mount (on Windows with WSL2 files) instead of a volume mount. I think the main issue is the volume name: if both projects use "source=remote-workspace", they resolve to the same named volume.
With Node.js, where I want to keep the node_modules folder inside the container, I have done a double mount following the official VS Code guide for this use case.
So I have left workspaceMount as the default bind mount and then added a volume that overrides a specific folder:
{
"mounts": [
// vvvvvvvvvvv name must be unique
"source=projectname-node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume"
]
}
The result is:
every file under / is served by the container
everything inside ${containerWorkspaceFolder} is served by the bind mount
everything inside ${containerWorkspaceFolder}/node_modules is served by the named volume
I have a devcontainer compose project that requires mongo and a replica server. This requires a few mongosh commands to be run, which I'd like to do in a separate container as a bash script.
My issue is that when using "Clone repository into Container volume" my mounted directory is empty. This works fine when I first check the repo out locally and then build the container from that.
Here is a demo repository that shows the issue: https://github.com/jrj2211/vscode-remote-try-node-mongo-compose
In this project, the compose file mounts the .devcontainer directory. The file I need is at the path: .devcontainer/scripts/mongosetup.sh.
volumes:
- ./scripts:/scripts
This produces the correct result locally, but the folder is empty when the repo is cloned into a Docker volume.
What is the correct path to the folder location in the WSL2 volume? Is there a way to make this work both locally and cloned in a docker volume?
I tried to set an ENV variable from the devcontainer.json that pointed to ${workspaceFolder} but that ended up as an empty string in compose.
The following documentation (linked from the second page below, which talks about "Clone Repository in Container Volume") makes me believe this should work:
https://code.visualstudio.com/remote/advancedcontainers/add-local-file-mount
https://code.visualstudio.com/remote/advancedcontainers/improve-performance
I was able to get this working with h4l's brilliant code. It takes containerWorkspaceFolder and localWorkspaceFolder and turns them into environment variables available in docker-compose. This has the added benefit of continuing to work both locally and in a container volume.
https://github.com/h4l/dev-container-docker-compose-volume-or-bind
Hopefully those variables soon become available in container mode directly so that additional scripts aren't needed.
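For reference, a rough sketch of the consuming side of that idea (not the exact contents of the linked repository; the variable name is illustrative, and how it gets populated on the host is what the repository's scripts handle): the compose file substitutes the generated variable and falls back to a relative path, so a plain local checkout keeps working.
# docker-compose.yml (sketch): consume the generated variable with a local fallback
services:
  mongosetup:
    volumes:
      - ${LOCAL_SCRIPTS_DIR:-./scripts}:/scripts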
I'd like to mount a volume from a Windows host into a Linux container and have the content of the target folder in the Linux container populate the folder on the Windows host.
Example:
- host folder: c:\Users\abc\myfolder
- container folder: /data/mydata
The container is built from an image that creates data inside /data/mydata.
If I do docker run -v c:\Users\abc\myfolder:/data/mydata image, then the content of c:\Users\abc\myfolder will override whatever was in /data/mydata inside the container. I would like to achieve the opposite (put the content of /data/mydata from the container into c:\Users\abc\myfolder).
Creating a shared folder, logging into the container, and copying the content of /data/mydata to the shared folder would expose the content of /data/mydata to the Windows host, but it involves a manual copy and is not very efficient.
Thank you.
There is a feature to control read and write permissions:
You can specify that a volume should be read-only by appending :ro to the -v switch:
docker run -v /path/in/host:/path/in/container:ro my_image_name
But that only affects the container side of the mount; the host folder stays read-write by default, and there is no option to flip the direction so that the container populates the host.
Sync
Maybe you could (see the sketch after this list):
create a folder called /folders/left (c:\Users\abc\myfolder in your case)
create a folder called /folders/right
create a container to populate /folders/right:
docker run -v /folders/right:/path/in/container my_image_name
ensure /folders/right is empty before the container starts (note: a bind-mounted host folder hides the image's content at that path even when empty, so the data must be written by the container at runtime, or copied out as in the sketch below)
with this you will have /folders/left (the host folder) and /folders/right (the changes made by the container)
finally, sync the two folders (e.g. right to left) with some tool
Linux: https://unix.stackexchange.com/a/203854/188975
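One way to script the "copy it out" route instead (a sketch; /data/mydata, /folders/right, and my_image_name are the names already used in this thread, and any flags the image needs are omitted):
# run the container once so it can create its data
docker run --name tmp_copy my_image_name
# copy the populated folder out of the container (works for stopped containers too)
docker cp tmp_copy:/data/mydata/. /folders/right/
docker rm -f tmp_copy
# then sync /folders/right into c:\Users\abc\myfolder with the tool of your choice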
I am using a Docker bind mount to map the host /dev/serial/ folder generated by Ubuntu (which contains identifying symlinks to serial devices such as /dev/ttyUSB0). The full docker run command I am using is:
docker run -d --restart always --privileged=true -v /dev/serial:/dev/serial DOCKER_IMAGE_NAME
This works fine on the first run; however, if the serial device is disconnected and reconnected, the symlinks are recreated. This change does not propagate into the container, which instead sees an empty /dev/serial folder. I also tested manually creating a file in this directory on the host and inside the container, and in both cases the change on one side was not visible on the other.
The mount is shown as:
{
"Type": "bind",
"Source": "/dev/serial",
"Destination": "/dev/serial",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
EDIT: Ubuntu creates the symlinks within two directories, by-path and by-id underneath the /dev/serial folder.
Bind mounts are based on inodes, so when a file is deleted and recreated the bind mount is broken. The change isn't picked up by the bind mount until a container restart, at which point it binds to the new inode.
A solution for this case (files being deleted and recreated) is to mount the parent directory instead, so in your case you can mount with -v /dev:/dev. Of course this exposes all of /dev to the container, so handle it with care.
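Applied to the run command from the question, that would look roughly like this:
docker run -d --restart always --privileged=true -v /dev:/dev DOCKER_IMAGE_NAME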
This issue exists in previous versions of Docker. The reason has been clearly explained by Sebastian Van:
This looks like a race condition; the problem here is that, when using the -v <host-path>:<container-path> option, docker will automatically create the host path if it does not exist. This functionality was marked for deprecation at some point (exactly because of these issues), but it was considered too much of a breaking change (see moby/moby#21666, and issues linked from that issue).
Docker for Windows (when running Linux containers) runs the docker daemon in a VM. Files/directories shared from your local (Windows) machine are shared with the VM using a shared mount, and I suspect that mount is not yet present when the container is started.
Because of that, the %USERPROFILE%/docker_shared directory is not found inside the VM, so the daemon creates that path (as an empty directory). Later, when the shared mount (from the Windows host) is present, it's mounted "on top" of the %USERPROFILE% directory inside the VM, but at that point, the container is already started.
Restarting (docker stop, docker start) resolves the situation because at that point, the shared mount is available, and the container will mount the correct path.
Follow the thread https://github.com/docker/for-win/issues/1192 for a better understanding. The issue was resolved in Docker version 2.0.4.0 (Edge channel) and later in the stable release.
I am using GitLab CI to deploy my websites in Docker containers. Because the GitLab CI Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have two volumes in my container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on every build) and a mount that sits "inside" that mount, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the GitLab CI runner starts it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to stop and remove the container manually before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (it seems you can't add data to a container with docker-compose without the volume option, like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
it seems you can't add data to a container with docker-compose without the volume option, like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
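A minimal sketch of that idea, using the paths from the question (the base image and the compose service name are made up for illustration):
# Dockerfile (sketch): bake the repo content into the image instead of bind-mounting it
FROM php:7-apache
COPY . /var/www/html/

# docker-compose.yml (sketch): only the persistent data remains a mount
version: "2"
services:
  web:
    build: .
    volumes:
      - /srv/data:/var/www/html/software/permdata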
I am learning Docker and I have doubts about when and where to use ADD and VOLUME. Here is what I think both of these do:
ADD
Copy files to the image at build time. The image has all the files so you can deploy very easily. On the other hand, needing to build every time doesn't look like a good idea in development because building requires the developer to run a command to rebuild the container; additionally, building the container can be time-consuming.
VOLUME
I understand that using docker run -v you can mount a host folder inside your container; this way you can easily modify files and watch the app in your container react to the changes. It looks great in development, but I am not sure how to deploy my files this way.
ADD
The fundamental difference between these two is that ADD makes whatever you're adding, be it a folder or just a file, actually part of your image. Anyone who uses the image you've built afterwards will have access to whatever you ADD. This is true even if you later remove it, because Docker works in layers and the ADD layer still exists as part of the image. To be clear, you only ADD something at build time and can never ADD at run time.
A few examples of cases where you'd want to use ADD:
You have some requirements in a requirements.txt file that you want to reference and install in your Dockerfile. You can then do: ADD ./requirements.txt /requirements.txt followed by RUN pip install -r /requirements.txt
You want to use your app code as context in your Dockerfile, for example, if you want to set your app directory as the working dir in your image and to have the default command in a container run from your image actually run your app, you can do:
ADD ./ /usr/local/git/my_app
WORKDIR /usr/local/git/my_app
CMD python ./main.py
VOLUME
Volume, on the other hand, just lets a container run from your image have access to some path on whatever local machine the container is being run on. You cannot use files from your VOLUME directory in your Dockerfile. Anything in your volume directory will not be accessible at build-time but will be accessible at run-time.
A few examples of cases where you'd want to use VOLUME:
The app being run in your container makes logs in /var/log/my_app. You want those logs to be accessible on the host machine and not to be deleted when the container is removed. You can do this by creating a mount point at /var/log/my_app by adding VOLUME /var/log/my_app to your Dockerfile and then running your container with docker run -v /host/log/dir/my_app:/var/log/my_app some_repo/some_image:some_tag
You have some local settings files you want the app in the container to have access to. Perhaps those settings files are different on your local machine vs dev vs production; especially if they are secret, you definitely do not want them baked into your image. A good strategy in that case is to add VOLUME /etc/settings/my_app_settings to your Dockerfile, run your container with docker run -v /host/settings/dir:/etc/settings/my_app_settings some_repo/some_image:some_tag, and make sure /host/settings/dir exists in all environments you expect your app to be run in.
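For instance, the log-directory case above could look like this (same paths and placeholder image name as in the bullets):
# Dockerfile (sketch): declare a mount point for the app's logs
VOLUME /var/log/my_app

# run with a host directory mounted over the declared volume, so logs outlive the container
docker run -v /host/log/dir/my_app:/var/log/my_app some_repo/some_image:some_tag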
The VOLUME instruction creates a data volume in your Docker container at runtime. The directory provided as an argument to VOLUME is a directory that bypasses the Union File System, and is primarily used for persistent and shared data.
If you run docker inspect <your-container>, you will see under the Mounts section there is a Source which represents the directory location on the host, and a Destination which represents the mounted directory location in the container. For example,
"Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Here are 3 use cases for docker run -v:
docker run -v /data: This is analogous to specifying the VOLUME instruction in your Dockerfile.
docker run -v $host_path:$container_path: This allows you to mount $host_path from your host to $container_path in your container during runtime. In development, this is useful for sharing source code on your host with the container. In production, this can be used to mount things like the host's DNS information (found in /etc/resolv.conf) or secrets into the container. Conversely, you can also use this technique to write the container's logs into specific folders on the host. Both $host_path and $container_path must be absolute paths.
docker run -v my_volume:$container_path: This creates a data volume in your container at $container_path and names it my_volume. It is essentially the same as creating and naming a volume with docker volume create my_volume. Naming a volume like this is useful for container data volumes and for shared-storage volumes backed by a multi-host storage driver like Flocker.
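To make the third case concrete, a short sketch (the volume name, the /webapp path, and the image placeholder are the ones already used in this answer):
# create and name the volume up front (docker run -v my_volume:/webapp would also create it)
docker volume create my_volume
# mount it into the container at /webapp
docker run -d -v my_volume:/webapp some_repo/some_image:some_tag
# see where the volume's data lives on the host
docker volume inspect my_volume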
Notice that the approach of mounting a host folder as a data volume is not available in Dockerfile. To quote the docker documentation,
Note: This is not available from a Dockerfile due to the portability and sharing purpose of it. As the host directory is, by its nature, host-dependent, a host directory specified in a Dockerfile probably wouldn't work on all hosts.
Now if you want to copy your files to containers in non-development environments, you can use the ADD or COPY instructions in your Dockerfile. These are what I usually use for non-development deployment.