I'm running Docker Desktop on Windows 10. I used the repository for the Fonduer Tutorials to create an image to run with Docker. This works fine so far, and I am able to run the notebooks included in the repository.
I would now like to copy some Jupyter notebooks and other data from the host to the container called fonduer-tutorials-jupyter-1 so I can make use of the Fonduer framework.
I am able to copy the files to the container and also to open the Jupyter notebooks, but unfortunately they open in read-only mode.
How can I copy files from host to container and still have permission to write on a Windows machine?
I read a lot about options like chown and other flags to use with COPY, but it seems they are not available on Windows machines.
Let's assume my UID received with id -u is 1000 and my GID received with id -g is 2000 if that is relevant to a solution.
Rather than copying the files manually and running into the access restrictions described above, a better solution is to map the host directory to a volume within the container via the .yml file, in this case the docker-compose.yml. To do so, add the following to the .yml file:
volumes:
  - [path to host directory]:[container path where the files should be placed in]
With this, the files are available both within the container and on the host.
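A fuller sketch of what that could look like, assuming the service is called jupyter and the notebooks live in ./notebooks on the host (the service name, image name, and both paths are placeholders; /home/jovyan/work is the working directory used by the common Jupyter Docker images, so adjust it to your image):

```yaml
services:
  jupyter:
    image: fonduer-tutorials-jupyter   # placeholder image name
    ports:
      - "8888:8888"
    volumes:
      # bind mount: edits on either side are immediately visible on the other,
      # and the notebooks open writable instead of read-only
      - ./notebooks:/home/jovyan/work
```

After docker compose up, any notebook saved in the container appears in ./notebooks on the host, and vice versa.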
I am using an Ubuntu 22.04 host running a Docker container in which I have defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the Git repo on my host from inside my container.
The problem is that when I compile a project and need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (which is the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VS Code is no longer able to install anything when it connects to the Docker container.
Is there a way to get an active user in my container and still allow VS Code Remote - Containers to install extensions on connecting to the container? Or is there a better way to avoid chmodding all build results?
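One common approach (not described in the question itself, so treat this as a sketch) is to bake a non-root user into the image whose UID/GID match the host user; files written through the bind mount then belong to you on the host. All names and IDs below are placeholders:

```dockerfile
FROM ubuntu:22.04   # placeholder base image

# pass in the host IDs at build time, e.g.
#   docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) .
ARG UID=1000
ARG GID=1000

RUN groupadd -g "${GID}" builder \
 && useradd -m -u "${UID}" -g "${GID}" builder

USER builder
WORKDIR /home/builder
```

With the Remote - Containers extension, setting "remoteUser": "builder" in devcontainer.json makes VS Code connect as that user rather than root, so its server components are installed under a user that can actually write its home directory.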
I am finding many answers on how to develop inside a container in Visual Studio Code with the Remote Containers extension, but surprisingly none address my use case.
I can add to the repo only from the host, but can run the code only from the container. If I edit a file in the container, I have to manually copy it to the host so I can commit it, but if I edit the file on the host, I have to manually copy it into the container so I can test it.
I would like to set up the IDE to automatically copy files from the host into the container whenever I save or change a file in a particular directory. This way I can both commit files on the host, and run them in the container, without having to manually run docker cp each time I change a file. Files should not automatically be copied from the container to the host, since the container will contain built files which should not be added to the repo.
It seems highly unlikely that this is impossible; but how?
This can be configured using the Run on Save extension.
Set it to run docker cp on save.
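Assuming the Run on Save extension by emeraldwalk, a settings.json entry along these lines would do it; the glob, container name, and target path are placeholders, and the ${file}/${fileBasename} substitutions are the ones documented for that extension:

```jsonc
{
  "emeraldwalk.runonsave": {
    "commands": [
      {
        // only files under src/ trigger the copy
        "match": "src/.*",
        // copy the just-saved file into the running container
        "cmd": "docker cp ${file} my-container:/app/${fileBasename}"
      }
    ]
  }
}
```

Because the copy only runs on save in the editor, files changed inside the container are never pushed back to the host, which matches the requirement that build artifacts stay out of the repo.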
I have a container that runs a Python script in order to download a few big files from Amazon S3. The purpose of this container is just to download the files so I have them on my host machine. Because these files are needed by my app (which runs in a separate container with a different image), I bind-mount the directory downloaded by the first container from my host into the app's container.
Note 1: I don't want to run the script directly from my host as it has various dependencies that I don't want to install on my host machine.
Note 2: I don't want to download the files while the app's image is being built as it takes too much time to rebuild the image when needed. I want to pass these files from outside and update them when needed.
Is there a way to make the first container download those files directly to my host machine, instead of downloading them in the container and then copying them to the host? As it stands, they take twice the space needed before the container is cleaned up.
Currently, the process is the following:
Build the temporary container image and run it in order to download the models
Copy the files from the container to the host
Clean up the unneeded container and image
Note: If there is a way to download the files from the first container directly to the second and override them if they exist, it may work too.
Thanks!
You would use a host volume for this. E.g.
docker run -v "$(pwd)/download:/data" your_image
This runs your_image, and anything written to /data inside the container is actually written to the ./download directory on the host.
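Applied to the downloader from the question, the three-step process collapses into a single run (the image name and paths are placeholders):

```shell
# build the downloader image once
docker build -t s3-downloader .

# bind-mount ./download from the host at /data in the container;
# the Python script writes to /data, so the files land on the host directly
docker run --rm -v "$(pwd)/download:/data" s3-downloader

# --rm removes the container on exit, so there is no separate
# docker cp step and no cleanup of a stopped container
```

The app's container can then bind-mount the same ./download directory, so the files are never stored twice.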
Context:
I have a Java Spring Boot Application which has been deployed to run on a Docker Container. I am using Docker Toolbox to be precise.
The application exposes a few REST APIs to upload and download files. The application works fine on Docker, i.e. I'm able to upload and download files using the API.
Questions:
In the application I have hard coded the path as something like "C:\SomeFolder". What location is this stored on the Docker container?
How do I force the application when running on Docker to use the Host file system instead of Docker's File system?
This is all done by Docker Volumes.
Read more about that in the Docker documentation:
https://docs.docker.com/storage/volumes/
In the application I have hard coded the path as something like "C:\SomeFolder". What location is this stored on the Docker container?
c:\SomeFolder, assuming you have a Windows container. This is the sort of parameter you'd generally set via a command-line option or environment variable, though.
How do I force the application when running on Docker to use the Host file system instead of Docker's File system?
Use the docker run -v option or an equivalent option to mount some directory from the host on that location. Whatever the contents of that directory are on the host will replace what's in the container at startup time, and after that changes in the host should be reflected in the container and vice versa.
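For the Windows-container case from the question, that might look like the following, where C:\HostData is a placeholder host directory:

```shell
docker run -v "C:\HostData:C:\SomeFolder" my-spring-app
```

With a Linux container you would instead mount onto a Linux path and point the application at it via a command-line option or environment variable, as suggested above.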
If you have an opportunity to rethink this design, there are a number of lurking issues around file ownership and the like. The easiest way to circumvent these issues is to store data somewhere like a database (which may or may not itself be running in Docker), use network I/O to send data to and from the container, and store as little as possible in the container filesystem. docker run -v is an excellent way to inject configuration files and get log files out in a typical server-oriented use.
I am using docker for software development, as I can bundle all my dependencies (compilers, libraries, ...) within a nice contained environment, without polluting the host.
The way I usually do things (which I guess is pretty common): I have a directory on the host that only contains the source code, which is mounted into a development container using a docker volume, where my software gets built and executed. Thanks to volumes being in sync, any changes in the source is reflected within the container.
Here is the pitfall: when using a code editor, software dependencies are considered broken because they are not accessible from the host. Therefore linting, etc. does not work.
I would like to be able to mount, let's say, /usr/local/include from the container onto the host so that, by correctly configuring my editor, I can fix all the warnings.
I guess docker volume is not the solution here, because it would override the contained file system...
Also, I'm using Windows (no choice here) therefore my flow is:
Windows > Samba > Linux Host > Docker > Container
and I'd prefer not switching IDE (VS Code).
Any ideas? Thank you!
You basically wish you could reverse mount a volume from the container to the host. This is unfortunately not possible with Docker, and there are variants of this question here: How to mount a directory in docker container to host
You're stuck with copying the files from the container to the host. Whether the host path can match /usr/local/include, or you have to use a different folder, depends on your setup.
The easiest solution which would not require changing the docker image would be to use docker cp to copy the files.
Otherwise, you could automate this by having the image on entry (after installing all dependencies) copy the files to /tmp/include and mount a host volume to that location.
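A sketch of that second approach (the image name and paths are assumptions): an entrypoint script baked into the image exports the headers into the bind-mounted directory before handing off to the real command:

```shell
#!/bin/sh
# entrypoint.sh inside the image:
# copy the container's headers into the bind-mounted directory
cp -r /usr/local/include/. /tmp/include/
# then run whatever command the container was started with
exec "$@"
```

Started with docker run -v "$(pwd)/include:/tmp/include" dev-image …, the headers appear in ./include on the host, where the editor's include path can be pointed at them.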
I use https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/13 to expose Python libraries from inside the container to a local folder so that Neovim can read them for autocomplete/jump-to-definition.