How to synchronize a container folder to an empty local folder while the container is running - docker

I would like to run a container with its folder linked to a local folder. I can link a local folder to a container folder, but if the container folder isn't empty, its content is overwritten.
What I would like is the reverse: when I run an image with docker run, the container folder writes its data into my local folder, and if I modify the files locally, the changes are written back into the container.
I ask because my Dockerfile downloads a framework, and when I start a new project from this image I would like the framework to come directly from the container, without having to download it manually first.
Is this possible?
Thanks for your responses.

You can use a volume that is mapped into your container:
docker run -it -v <my_local_folder>:<my_container_folder> <my_image>
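For example, assuming a hypothetical image named myimage that ships a framework under /app:

# bind mount: the host side wins, so an empty host folder will hide
# whatever the image put in /app (the behavior described above)
docker run -it -v "$HOME/myproject":/app myimage

# named volume: if the volume is empty on first use, Docker copies the
# image's /app content into it, so the framework files appear on both sides
docker run -it -v myproject_data:/app myimage

Note that a named volume lives under Docker's own storage area rather than in an arbitrary local folder, which is why the answers below resort to docker cp or a sync step.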

Related

VSCode Remote Containers automatically copy file from host to docker container on save

I am finding many answers on how to develop inside a container in Visual Studio Code with the Remote Containers extension, but surprisingly none address my use case.
I can add to the repo only from the host, but can run the code only from the container. If I edit a file in the container, I have to manually copy it to the host so I can commit it, but if I edit the file on the host, I have to manually copy it into the container so I can test it.
I would like to set up the IDE to automatically copy files from the host into the container whenever I save or change a file in a particular directory. This way I can both commit files on the host, and run them in the container, without having to manually run docker cp each time I change a file. Files should not automatically be copied from the container to the host, since the container will contain built files which should not be added to the repo.
It seems highly unlikely that this is impossible; but how?
This can be configured using the Run on Save extension.
Set it to run docker cp on save.
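As a sketch, assuming the emeraldwalk Run on Save extension, you could add something like this to settings.json (the container name my_app, the match pattern, and the target path are placeholders for your setup):

"emeraldwalk.runonsave": {
    "commands": [
        {
            // copy any saved file under src/ into the running container
            "match": "src/.*",
            "cmd": "docker cp ${file} my_app:/workspace/src/${fileBasename}"
        }
    ]
}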

Docker mount a volume and populate it from container

I'd like to mount a volume from a Windows host into a Linux container and have the content of the target folder in the Linux container populate the folder on the Windows host.
Example:
- host folder: c:\Users\abc\myfolder
- container folder: /data/mydata
The container is built from an image that creates data inside /data/mydata
If I do docker run -v c:\Users\abc\myfolder:/data/mydata image, then c:\Users\abc\myfolder content will override whatever was on /data/mydata inside the container. I would like to achieve the opposite (put the content of /data/mydata from the container in c:\Users\abc\myfolder)
Creating a shared folder and then logging into the container and copying the content of /data/mydata to the shared folder would expose the content of /data/mydata to the Windows host, but it involves a manual copy and is not very efficient.
Thank you.
There is a feature to control read and write permissions.
You can specify that a volume should be read-only by appending :ro to the -v switch:
docker run -v /path/in/host:/path/in/container:ro my_image_name
Note that :ro only restricts writes from inside the container; by default the mounted folder remains read-write on the host side.
Sync
Maybe you could:
create a folder called /folders/left (c:\Users\abc\myfolder in your case)
create a folder called /folders/right
create a container to populate /folders/right:
docker run -v /folders/right:/path/in/container my_image_name
ensure /folders/right is empty before the container starts, so that it does not override the internal container folder
with this you will have /folders/left (host folder) and /folders/right (changes made by container)
finally, sync between left and right (in whichever direction you need) with some tool; see the sketch after this list
on Linux: https://unix.stackexchange.com/a/203854/188975
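For instance, a minimal sketch of the sync step, assuming rsync is available and that you want the container's output (right) mirrored into your host folder (left):

# mirror the container-populated folder into the host folder;
# --delete keeps the destination an exact copy of the source
rsync -av --delete /folders/right/ /folders/left/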

Is it possible to save file from docker container to host directly

I have a container that runs a Python script in order to download a few big files from Amazon S3. The purpose of this container is just to download the files so that I have them on my host machine. Because these files are needed by my app (which runs in a separate container built from a different image), I bind mount the directory downloaded by the first container from my host into the app's container.
Note 1: I don't want to run the script directly from my host as it has various dependencies that I don't want to install on my host machine.
Note 2: I don't want to download the files while the app's image is being built as it takes too much time to rebuild the image when needed. I want to pass these files from outside and update them when needed.
Is there a way to make the first container to download those files directly to my host machine without downloading them first in the container and then copying them to the host as they take 2x the space needed before cleaning up the container?
Currently, the process is the following:
Build the temporary container image and run it in order to download the models
Copy the files from the container to the host
Clean up the unneeded container and image
Note: If there is a way to download the files from the first container directly to the second and override them if they exist, it may work too.
Thanks!
You would use a host volume for this. E.g.:
docker run -v "$(pwd)/download:/data" your_image
This runs your_image, and anything written to /data inside the container is actually written to the ./download directory on the host.
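Applied to the two-container setup in the question, the flow might look like this (downloader_image and app_image are hypothetical names, and /models is an assumed path where the app expects the files):

# the downloader writes the S3 files straight to a host directory
docker run --rm -v "$(pwd)/download:/data" downloader_image
# the app mounts the same host directory, so no docker cp step is needed
docker run -v "$(pwd)/download:/models" app_image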

Docker mount volume to reflect container files in host

The use case is that I want to download an image that contains Python code files. Assume the image does not have any text editor installed. So I want to mount a drive on the host, so that the files in the container show up in this host mount and I can use the different editors installed on my host to update the code. Saved changes are to be reflected in the image.
If I run the following:
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
then /host/empty/dir is still empty, and browsing the container dir also shows it as empty. What I want is for the contents of /container/folder/with/code/files to show up in /host/empty/dir.
Sébastien Helbert's answer is correct. But there is still a way to do this in 2 steps.
First run the container to extract the files:
docker run --rm -it myimage
In another terminal, type this command to copy what you want from the container.
docker cp <container_id>:/container/folder/with/code/files /host/empty/dir
Now stop the container. It will be deleted (--rm) when stopped.
Now if you run your original command, it will work as expected.
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
There is another way to access the files from within the container without copying them, but it's very cumbersome.
Your /host/empty/dir is always empty because the volume binding replaces (overrides) the container folder with your empty host folder. But you cannot do the opposite, that is, have a container folder replace your host folder.
However, there is a workaround: manually copy the files from your container folder to your host folder before using them, as you have suggested.
For example:
run your Docker image with a volume mapping between your host folder and a temp folder: docker run -v /host/empty/dir:/some-temp-folder -it myimage
copy the content of /container/folder/with/code/files into /some-temp-folder to fill your host folder with your container folder's content
run your container with a volume mapping on /host/empty/dir, which is now no longer empty: docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
Note that steps 1 & 2 may be replaced by: Copying files from Docker container to host
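Put together, the workaround might look like this (the image name myimage and the paths come from the steps above; this assumes the image has cp available and no entrypoint that would swallow the command):

# steps 1 & 2: mount the host folder on a temp path and copy the
# code files into it from inside the container
docker run --rm -v /host/empty/dir:/some-temp-folder myimage cp -a /container/folder/with/code/files/. /some-temp-folder/
# step 3: re-run with the now-populated host folder mounted in place
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage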

Make directory available locally in Docker

I have a directory in my Docker container, and I'm trying to make it available locally using -v screenshots:/srv/screenshots in my docker run command but it's not available.
Do I need to add something else to my command?
Host volumes are mapped from the host into the container, not the other way around. This is one way to get persistent storage (so the data doesn't disappear when the container is re-created).
You can copy the screenshots folder to your host with docker cp and then map it in.
You will then have your screenshots in the local screenshots folder. Mapping it in with -v makes it appear in /srv/screenshots in the container, but the files really live on the host. Note that the host side of a -v bind mount must be an absolute path; a bare name like screenshots is treated as a named volume, which may be why nothing shows up locally.
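A sketch of that flow, assuming a running container named my_container and an image named myimage:

# one-time: copy the existing screenshots out of the container
docker cp my_container:/srv/screenshots ./screenshots
# from then on, bind-mount the host folder (absolute path required)
docker run -v "$(pwd)/screenshots:/srv/screenshots" myimage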
See: Mount a host directory as data volume
