How can I transfer files between a Windows host machine and a Linux Docker image?

I am running Windows 10 and the most recent version of Docker. I am trying to run a Docker image and transfer files to and from it.
I have tried the "docker cp" command, but from what I've seen online it does not work on Docker images, only on containers.
When searching for information on this topic, I have only found answers dealing with containers, not with images.

A Docker image is basically a template used to create containers. If you add something to the image, it will show up in every container started from that image. So if you just want to share a fixed set of files that don't change, you can add a COPY instruction to your Dockerfile, build a new image, and the files will be there in every container you run from it.
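For example, a minimal Dockerfile sketch (the base image, folder name, and target path here are just placeholders):

FROM ubuntu:22.04
# Copy a folder from the Windows build context into the image
COPY shared-files/ /opt/shared-files/

Build it with docker build -t myimage . and every container you run from myimage will contain /opt/shared-files.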
Another option is to use shared volumes. Shared volumes are basically folders that exist both on the host machine and in the running Docker container. If you move a file into that folder on the host, it becomes available inside the container, and anything the container writes into that folder is accessible from the host side.
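As a sketch, assuming a hypothetical folder C:\Users\me\shared on the Windows host and an image called myimage:

docker run -v C:\Users\me\shared:/shared myimage

Anything placed in C:\Users\me\shared on the host shows up under /shared inside the container, and vice versa.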

Related

Docker: How to include big folder to container?

I am new to Docker and I have a Docker Compose setup with three different services, but I have a problem regarding file size in Docker.
In order to serve images to our users, our server (written in Java/Spring) reads from a local directory called Images; the same directory is used to save new images. This directory is almost 50 GB in size and I can't include it inside the Docker container because of size limitations.
I created an Images folder inside the container and tried to symlink it to the Images directory on the host machine, but that also failed.
My question is: how can I give the container access to this folder?
There is a size limit on Docker containers known as the base device size. The default value is 10 GB.
You can increase this value by passing the storage-opt option to the docker run command. See https://docs.docker.com/engine/reference/commandline/run/#set-storage-driver-options-per-container
Or, if you are running it with docker-compose, see https://docs.docker.com/compose/compose-file/compose-file-v2/#storage_opt
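As a rough example (the size value and image name are placeholders, and note that this option only works with certain storage drivers, such as devicemapper, or overlay2 on an xfs filesystem with pquota enabled):

docker run --storage-opt size=60G your_image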

Is it possible to save a file from a Docker container to the host directly?

I have a container that runs a Python script in order to download a few big files from Amazon S3. The purpose of this container is just to download the files so that I have them on my host machine. Because these files are needed by my app (which runs in a separate container from a different image), I bind mount the directory downloaded by the first container from my host into the app's container.
Note 1: I don't want to run the script directly from my host as it has various dependencies that I don't want to install on my host machine.
Note 2: I don't want to download the files while the app's image is being built as it takes too much time to rebuild the image when needed. I want to pass these files from outside and update them when needed.
Is there a way to make the first container download those files directly to my host machine, without downloading them into the container first and then copying them to the host? That approach takes twice the space needed until the container is cleaned up.
Currently, the process is the following:
1. Build the temporary container image and run it in order to download the models
2. Copy the files from the container to the host
3. Clean up the unneeded container and image
Note: If there is a way to download the files from the first container directly into the second and overwrite them if they exist, that may work too.
Thanks!
You would use a host volume for this. E.g.
docker run -v "$(pwd)/download:/data" your_image
would run your_image, and anything written to /data inside the container would actually be written to the ./download directory on the host.
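To then share those files with the app's container, the same host directory can be mounted into it. A rough sketch, where downloader_image and app_image are placeholder names:

docker run --rm -v "$(pwd)/download:/data" downloader_image
docker run -v "$(pwd)/download:/models:ro" app_image

The downloader writes straight into ./download on the host, so there is no second copy to clean up, and the app container sees the same files (read-only here) under /models.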

Docker for Windows - Export Volume Data

I created two Docker containers with Compose on Docker for Windows, using WordPress and MariaDB. I've created a volume for WordPress that points to my PC's normal filesystem, but MariaDB's is still contained within the Hyper-V virtual hard disk.
The mount point is at /var/lib/docker/volumes/1995...ca3/_data
I've tried looking at previous answers, but the link that would explain how to back up, copy, or restore volumes redirects to a general volume explanation. Most plugins or scripts I've seen for Docker typically refer to a *nix environment.
Would anyone know of a modern method to export and import volumes mounted to Linux containers in Docker for Windows?
The way I normally do this is to start a container that mounts two volumes, the source volume and the destination volume, and run a command in that container that copies the contents of one volume to the other. I don't have a copy of Windows at hand to check the exact details there, but I'm sure it can be done quite easily.
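A sketch of that approach, assuming a source volume called mariadb_data and a destination volume called mariadb_backup (both names are placeholders):

docker volume create mariadb_backup
docker run --rm -v mariadb_data:/from -v mariadb_backup:/to alpine cp -a /from/. /to/

The same pattern works with a host directory on the destination side instead of a named volume, which effectively exports the volume's data to the Windows filesystem.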

How to keep dot files in docker container?

I have installed some software in a Docker image. When I run the software, it creates some settings files (dot files) under the root home folder. The problem is that those files are gone when I quit the container.
Is there a way to keep those dot files after I quit the container? I know I can manually commit the container to an image, but that is not an elegant solution; it would mean saving the container to an image every time I use it.
Any better solutions?
Thanks!
A simple solution would be to use a volume.
docker volume create configuration
And then you just run each container with it.
docker run -d -v configuration:container_configuration_dir your_image_name
The left side of the : is the name of the volume created with the first command, and the right side is the directory inside the container where your dot files are created.
Keep in mind how mounts work, and for more details check the Docker docs on volumes.
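For the case in the question (dot files under root's home directory), a concrete sketch might look like this, with your_image_name as a placeholder:

docker volume create configuration
docker run --rm -it -v configuration:/root your_image_name

Anything the software writes under /root is stored in the configuration volume, so it survives after the container is removed, and the next container started with the same -v flag sees the same files.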

How can I use a local file in a container?

I'm trying to create a container to run a program. I'm using a pre-configured image and now I need to run the program. However, it's a machine learning program and I need a dataset from my computer to run it.
The file is too large to be copied into the container. It would be best if the program running in the container could read the dataset from a local directory on my computer, but I don't know how I can do this.
Is there any way to do this with some Docker command? Or using the Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
Keep in mind, if you are using Docker for Mac or Docker for Windows, there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory.
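For the Windows case in the original question, the equivalent would be something like this (the path is just an example under C:\Users, which is shared by default):

docker run -v C:\Users\andy\mydata:/mnt/mydata myimage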
Update July 2019:
I've updated the documentation link and naming to be correct. This type of mount is called a "bind mount". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mounting paths on the host).
