I use the command:
docker run -it -v /myhostpath:/dockerpath $container
so I get a mapping from a host dir to a dir in the container, and I do see the files shared by both dirs.
After that, I exit the container, commit it, and save the image.
However, when I start a container from that new image, the shared files are gone.
Could anyone tell me what is happening? Thanks!
The -v option is like using the mount command to mount an external file system. Directories mapped into a container with -v are not in the container's filesystem. Therefore committing the changes in the container's filesystem to a new image does not include these external files.
If you want to copy some external files to the container, you would need to use -v to temporarily mount the directory and then use a cp command to copy to a directory local to the container, and then commit.
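A minimal sketch of that mount-copy-commit flow (the image name myimage, container name tmp, and target directory /data are all hypothetical):

# hypothetical names: myimage, tmp, /data
docker run --name tmp -v /myhostpath:/dockerpath myimage cp -r /dockerpath /data
docker commit tmp myimage:with-data
docker rm tmp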
Or, you could include the files using ADD or COPY in your Dockerfile.
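For instance, a two-line Dockerfile sketch (the base image and paths are placeholders, assuming the files sit next to the Dockerfile):

# hypothetical base image and paths
FROM ubuntu:16.04
COPY myhostpath/ /dockerpath/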
If the external files are essential input or output that lives outside the container and could be needed elsewhere (log files, calculation results, database data, etc.), or if they are supposed to be kept secret, then you should not copy them into the image. Instead, keep using -v to mount that directory into the container.
Related
I have a Docker image, and I am running it now (with bash as the final command).
When I do, I have a file structure inside the container.
However, this is not some file structure mapped (with -v) from outside the container. These files and folders exist only inside the container.
My question is: since it is bothersome to open each file with vi and navigate from the terminal, is there a way to open VS Code on these files?
Be aware that these files do not exist outside the container.
I found out how to do it from this link.
However, I used the "attach to running container" command.
I rarely do that, but when I have to, I usually mount an empty volume into the container, then exec into the container and copy the folder I need into that empty volume, which is then replicated on my host machine. From my host machine I then open it in VS Code.
However, if there is sensitive information in that container, please be careful not to expose something by accident.
So the steps are:
Create an empty volume (docker-compose example).
Note: do not overwrite the folder/file you want to extract; containerpath is a path that does not exist in the container before you create it.
volumes:
  - ./hostpath:/containerpath
Find the container id so that you can exec into it:
docker ps
Exec into the container:
docker exec -it <container_id> /bin/sh
Copy the file/folder to that empty volume:
cp -r <folder> /containerpath
Exit the container and look at your files in ./hostpath folder.
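The same flow also works without docker-compose; a rough equivalent with plain docker (myimage and /folder are placeholders, and sleep just keeps the container alive):

# myimage and /folder are placeholders; sleep keeps the container running
docker run -d --name myenv -v "$(pwd)/hostpath:/containerpath" myimage sleep 3600
docker exec myenv cp -r /folder /containerpath
docker rm -f myenv
ls ./hostpath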
I have a directory in my Docker container, and I'm trying to make it available locally using -v screenshots:/srv/screenshots in my docker run command but it's not available.
Do I need to add something else to my command?
Host volumes are mapped from the host into the container, not the other way around. This is one way to have persistent storage (so the data don't disappear when the container is re-created).
You can copy the screenshots folder to your host with docker cp and then map it in.
You will then have your screenshots in the local screenshots folder. Note that -v screenshots:/srv/screenshots does not refer to a host directory at all: a name without a path prefix is treated as a named volume. To bind mount a host folder you need an absolute path, e.g. -v $(pwd)/screenshots:/srv/screenshots, which makes the folder appear at /srv/screenshots in the container while the files really live on the host.
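For example (the container id and image name are placeholders):

# copy out with docker cp, then bind mount with an absolute path
docker cp <container_id>:/srv/screenshots ./screenshots
docker run -v "$(pwd)/screenshots:/srv/screenshots" myimage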
See: Mount a host directory as data volume
I have a situation where:
I want to mount a directory ~/tmp/mycode to /mycode readonly
I want to be able to edit the files inside the container, so I can't just mount it read-only with -v /my/local/path/tmp/mycode:/mycode:ro
I don't want changes to persist on the host filesystem, though, so I can't mount it read/write either
~/tmp/mycode is rather large
Basically I want to be able to edit the files in the mounted volume but not have those changes persisted.
My current workflow is to create a dummy container from a Dockerfile along these lines (any base image will do):
FROM ubuntu
ADD . /mycode
and then execute that container.
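Concretely, something like this (the tag name is arbitrary):

docker build -t mycode-dummy .
docker run -it mycode-dummy /bin/bash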
However, as the repository grows, this step takes longer and longer, because the only way I can think of is to make a complete copy of ~/tmp/mycode in order to be able to manipulate the files in the container.
I've also thought about mounting the directory and copying it inside the container and committing that container, but that has the same issue.
Is there a way to run a docker container to allow file edits without persisting them on the host short of copying the whole directory?
I am using the latest docker for mac, currently Version 17.03.1-ce-mac5 (16048).
This is fairly trivial to do with docker and overlay:
docker run --name myenv --privileged -v /my/local/path/tmp/mycode:/mnt/rocode:ro -it ubuntu /bin/bash
docker exec -d myenv /sbin/mount -t overlay overlay -o lowerdir=/mnt/rocode,upperdir=/mycode,workdir=/mnt/code-workdir /mycode
This should mount the code from your directory read only and create the overlay inside the container so that /mnt/rocode is read only, but /mycode is writable.
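One assumption the commands above make is that the overlay's mount point and work directory already exist inside the container; if they don't, create them between the two commands:

# hypothetical step, needed only if these directories are missing
docker exec myenv mkdir -p /mycode /mnt/code-workdir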
Make sure that your kernel is 3.18+ and that you have overlay in your /proc/filesystems.
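A quick way to check both:

uname -r                        # kernel version, should be 3.18 or newer
grep overlay /proc/filesystems  # should list an overlay entry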
I am trying to capture the state of a docker container as an image, in a way that includes files I have added to a volume within the container. So, if I run the original container in this way:
$ docker run -ti -v /cookbook ubuntu:14.04 /bin/bash
root@b78f3599d936:/# cd cookbook
root@b78f3599d936:/cookbook# touch foo.txt
Now, if I either export, or commit the container as a new docker image, and then run a container from the new image, then the file, foo.txt is never included in the /cookbook directory.
My question is whether there is a way to create an image from a container in a way that allows the image to include file content within its volumes.
whether there is a way to create an image from a container in a way that allows the image to include file content within its volumes?
No, because volumes are designed to manage data inside and between your Docker containers; they are used to persist and share data. What's in an image is usually your program (artifacts, executables, libraries, etc.) with its whole environment; baking changing data into an image does not make much sense.
The docs on volumes tell us:
Changes to a data volume will not be included when you update an image.
And the docs for docker commit say:
The commit operation will not include any data contained in volumes mounted inside the container.
Well, by putting the changes in a volume, you're excluding them from the actual container. The documentation for docker export includes this:
The docker export command does not export the contents of volumes associated with the container. If a volume is mounted on top of an existing directory in the container, docker export will export the contents of the underlying directory, not the contents of the volume.
Refer to Backup, restore, or migrate data volumes in the user guide for examples on exporting data in a volume.
This points to this documentation. Please follow the steps there to export the information stored in the volume.
You're probably looking for something like this:
docker run --rm --volumes-from <containerId> -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /cookbook
This would create a file backup.tar with the contents of the container's /cookbook directory and store it in the current directory of the host. You could then use this tar file to import it in another container.
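Restoring works the same way in reverse, following the backup/restore pattern from the Docker docs (the new container id is a placeholder):

docker run --rm --volumes-from <newContainerId> -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/backup.tar"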
Essentially, there are three ways to do persistence in Docker:
You can keep files in a volume, which is a filesystem managed by Docker. This is what happens in your example: because the /cookbook directory is part of a volume, your file does not get committed or exported with the image. It does, however, get stored in the volume, so if you remount the same volume in a different container, you will find your file there. You can list your volumes using docker volume ls. As you can see, you should probably give your volumes names if you plan to reuse them. You can mount an existing volume, or create a new one if the name does not exist, with
docker run -v name:/directory ubuntu
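Or create the volume explicitly up front (the volume name cookbook-data is made up):

docker volume create cookbook-data
docker run -it -v cookbook-data:/cookbook ubuntu:14.04 /bin/bash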
You can keep files as part of the image. If you commit the container, all changes to its file hierarchy are stored in the new image except those made to mounted volumes. So if you just get rid of the -v flag, your file shows up in the commit.
You can bind mount a directory from the host machine to the container, by using the -v /hostdir:/targetdir syntax. The container then simply has access to a directory of the host machine.
docker commit allows you to create an image from a container and its data (mounted volumes will be ignored).
This application I'm trying to Dockerize has configuration files in the root of the install dir. If I use VOLUME to mount the install dir on the host, I'll end up with the application on the host, too. I only want to store the configuration files on the host.
Should I use hard links in the container and use VOLUME to mount the dir that has the hardlinks? Do hard links even work in a container?
You can mount individual files. Below is from the docker documentation https://docs.docker.com/engine/userguide/containers/dockervolumes/
Mount a host file as a data volume
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have your bash history from the host and when you exit the container, the host will have the history of the commands typed while in the container.
Note: Many tools used to edit files, including vi and sed --in-place, may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
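For example, rather than mounting a single config file, you could mount the directory that contains it (the paths here are made up):

docker run --rm -it -v ~/myapp-conf:/etc/myapp ubuntu /bin/bash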