Docker add files to VOLUME

I have a Dockerfile which copies some files into the container and after that creates a VOLUME.
...
ADD src/ /var/www/html/
VOLUME /var/www/html/files
...
Inside the src folder there is a files folder, and this files folder contains some files that I need to have copied into the VOLUME the first time the container is started.
I thought that the first time the container is created it would use the content of the original directory specified in the volume, but this is not the case.
So how can I get the files into this folder?
Do I need to create an extra folder and copy it with a run script (I hope not)?

Whatever you put in your Dockerfile is evaluated only at build time (and not when you are creating a new container).
If you want to make files from the host available in your container, use a data volume:
docker run -v /host_dir:/container_dir ...
In case you just want to copy files from the host to a container as a one-off operation you can use:
docker cp /host_dir mycontainer:/container_dir
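Applied to the layout in the question, that could look like the following (the image and container names here are illustrative, not from the original post):
docker run -v "$PWD/src/files:/var/www/html/files" myimage
docker cp src/files/. mycontainer:/var/www/html/files/
The first command bind-mounts the host's src/files over the volume path at run time; the second copies the directory contents into an already-running container named mycontainer.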

The issue is with your ADD statement, and you may also be misunderstanding how volumes are accessed. Compare your approach with the demo below:
# alpine, or your favorite tiny image
FROM alpine
ADD src/files /var/www/html/files
VOLUME /var/www/html/files
Build an image called 'dataimg':
docker build -t dataimg .
Use the dataimg image to create a data container named 'datacon':
docker run --name datacon dataimg /bin/cat
Mount the volume from datacon in your nginx container:
docker run --volumes-from datacon nginx ls -la /var/www/html/files
And you'll see that the listing of /var/www/html/files reflects the contents of src/files.
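A present-day alternative to the data-container pattern is a named volume: when an empty named volume is mounted at a path, Docker seeds it with the image's content at that path on first use. A sketch using the same dataimg image:
docker volume create webfiles
docker run --rm -v webfiles:/var/www/html/files dataimg ls -la /var/www/html/files
Note that this copy-on-first-use only happens for named volumes that are empty; bind mounts of host directories are never seeded this way.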

Related

Docker volume creates malfunction in container behaviour

I am trying to create a volume for a directory in a Docker container (Confluence).
https://hub.docker.com/r/atlassian/confluence-server/
To fix a bug with Postgres, I have to add driver files manually in the container. The location inside the container is:
/opt/atlassian/confluence/confluence/WEB-INF/lib
After creating the volume I wanted to add the newer driver into that directory. So, inside my docker-compose.yaml, I mapped a volume to the directory:
- ./data/driverfiles:/opt/atlassian/confluence/confluence/WEB-INF/lib
The volume and directory get created after calling docker-compose up, and everything seems fine.
The problem is that the volume remains empty, and when starting an interactive shell in the container, the directory, which was once filled with thousands of files, is empty too. When I remove the volume from the docker-compose.yaml, the directory is full of files again.
It looks like mapping the volume to this directory somehow prevents the container from populating it with files. What is going on here?
If you're mounting a host directory over a container directory, at container startup time, this is always a one-way operation: whatever is in the host directory (if anything) completely replaces whatever might have been in the image. Content from a container will never get copied into a host directory unless the image startup code explicitly does this for you.
If you need to modify a configuration file in the container, you first need to copy it out of the image; for example:
# with the volumes: mount deleted
docker-compose run -T confluence \
  sh -c 'cd /opt/atlassian/confluence/confluence/WEB-INF/lib && tar cf - .' \
  | tar xf -
That particular invocation will copy the entire directory out to the host, where you can mount it in again.
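Assuming you ran that tar pipeline from inside ./data/driverfiles on the host, the original mount from the question can then go back into docker-compose.yaml:
volumes:
  - ./data/driverfiles:/opt/atlassian/confluence/confluence/WEB-INF/lib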
Note that if there's an updated image that changes the contents of this lib directory, the content you have on the host will always take precedence; this hides any changes that might be made in the image.
You might find it more reliable to build a custom image that adds the driver files you need:
FROM ...confluence:...
COPY ... /opt/atlassian/confluence/confluence/WEB-INF/lib
# Use CMD and all other options from the original image
Specify build: . (and no image:) in the docker-compose.yml file to use this Dockerfile.
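A sketch of the corresponding service definition (the service name and port are assumptions, not taken from the question):
services:
  confluence:
    build: .            # build from the local Dockerfile instead of pulling an image
    ports:
      - "8090:8090"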

How to copy from volume mapped opt to image opt folder in docker?

Assuming I have a Docker image:
FROM openjdk:8-jdk-slim
WORKDIR /opt
COPY localfile ../imagefile
I can create my Docker image with docker build -t my-image . and have my local localfile in the image as ../imagefile (that is, at /imagefile, one level above the /opt work directory).
I can also do this interactively by
Run docker run -it --name my-container --volume=$(pwd):/opt --workdir=/opt openjdk:8-jdk-slim
Then cp localfile ../imagefile
Then exit
Then create the image by running docker commit my-container my-image
Both approaches produce an equivalent my-image.
However, if I change my Dockerfile to below
FROM openjdk:8-jdk-slim
WORKDIR /opt
COPY localfile imagefile
I can build the image using docker build -t my-image . as before. However, I cannot cp localfile imagefile from inside the container, because cp will only copy the file into the host folder that is mapped onto /opt, not into the image's actual /opt folder.
Is there still a way to copy my file into the image's /opt folder (and not the local one) when /opt is mapped to the local folder?
Or, to ask it another way: is there an equivalent of the COPY command that I can use when I'm running the container interactively to create the image?
There are two important details around this question:
If you mount something into a Docker container (or for that matter on your Linux host), it hides whatever was in that directory before; you can't directly see or access it.
In Docker in particular, you can't change mounts without deleting and recreating the container, and thereby removing the container filesystem.
So on the one hand, you can't copy from a mounted volume to the container filesystem at the same location; on the other, even if you could, there's (almost) no way to see the contents of the filesystem.
(As you note, docker commit creates an image from the container filesystem, ignoring volumes, so it is about the only thing that can still see that hidden content. Using docker commit isn't really a best practice, though; building images via Dockerfiles as you've shown and using docker build is almost always preferred.)
In general I've found volume mounts useful for three things, where hiding the container content is acceptable or even expected. You can use mounts to inject config files into containers (where you want to overwrite the image's copy). You can use mounts to read log files back out (where you don't care what the image started with). If you have a stateful workload like a database, you can use a mount to hold the data that needs to be persisted (so it outlives the container filesystem).
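As a sketch, those three patterns might look like this (the image, paths, and volume name are all placeholders):
# 1. inject a config file (read-only; overwrites the image's copy)
# 2. read log files back out to the host
# 3. keep database-style state in a named volume so it outlives the container
docker run \
  -v "$PWD/app.conf:/etc/app/app.conf:ro" \
  -v "$PWD/logs:/var/log/app" \
  -v app_data:/var/lib/app \
  some-image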

Docker image mounting the existing data volume for nexus

I have added new repositories on nexus 3.17. There is no data, just repos.
Created .bak files
Created a Docker image for Nexus (call it newimage).
I have data from the current version (3.3.1) on a volume, and I want the data to show up in the new Nexus. When I run the docker run command below and go to Nexus, the new repositories do not show up, but the data is there for the old repos.
docker run -d -p 8081:8081 --name nexus -v <local-docker-volume>:/nexus-data newimage
The Dockerfile I use to create the image:
FROM sonatype/nexus3:3.17.0
COPY path-to-bak-files/*.bak /nexus-data/restore-from-backup/
Any idea what I am doing wrong?
P.S.: let me know if I am not clear.
As per your Dockerfile, you are copying the contents to /nexus-data/restore-from-backup/, but while running the container you are mounting the existing volume on /nexus-data, which ends up masking the /nexus-data directory on the filesystem inside the container (where you added the data during image creation).
An important thing to note about mounts is that if you mount another disk/share (in this case a volume) on an existing directory of your filesystem (FS), you can't access the directory's original contents from your FS anymore. Thus, when you created the Docker image, you added some files to /nexus-data/restore-from-backup/, but when you mounted the volume on /nexus-data, you mounted it over that directory, so you can't see the files from your FS now.
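You can see the masking directly (newimage and <local-docker-volume> are the question's placeholders; this assumes the image's command can be overridden):
docker run --rm newimage ls /nexus-data/restore-from-backup
# the baked-in .bak files are listed
docker run --rm -v <local-docker-volume>:/nexus-data newimage ls /nexus-data/restore-from-backup
# the pre-existing volume now covers /nexus-data, so they are not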
To address the issue, you can do the following:
add the data to a different location in the Dockerfile, say location1
create the container with the volume mount as you are currently doing
use an ENTRYPOINT or CMD script to copy the files from location1 to /nexus-data/restore-from-backup/ (see the sketch below)
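A hedged sketch of that approach; the staging directory, script name, and the final start command are assumptions (check the base image for its actual CMD):
FROM sonatype/nexus3:3.17.0
COPY path-to-bak-files/*.bak /opt/staged-backups/
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
And docker-entrypoint.sh:
#!/bin/sh
# runs after the volume is mounted: move the staged files into place
mkdir -p /nexus-data/restore-from-backup
cp /opt/staged-backups/*.bak /nexus-data/restore-from-backup/
# hand off to the image's original start script (path is an assumption)
exec /opt/sonatype/start-nexus-repository-manager.sh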

Docker mount volume to reflect container files in host

The use case is that I want to download an image that contains Python code files. Assume the image does not have any text editor installed. So I want to mount a directory on the host, so that the files in the container show up in this host mount and I can use the different editors installed on my host to update the code. Saved changes are then to be reflected in the image.
If I run the following:
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
the /host/empty/dir is still empty, and browsing the container dir also shows it as empty. What I want is the file contents of /container/folder/with/code/files to show up in /host/empty/dir.
Sébastien Helbert's answer is correct. But there is still a way to do this in two steps.
First run the container to extract the files:
docker run --rm -it myimage
In another terminal, type this command to copy what you want from the container.
docker cp <container_id>:/container/folder/with/code/files /host/empty/dir
Now stop the container. It will be deleted (--rm) when stopped.
Now if you run your original command, it will work as expected.
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
There is another way to access the files from within the container without copying them, but it's very cumbersome.
Your /host/empty/dir is always empty because the volume binding replaces (overrides) the container folder with your empty host folder. But you cannot do the opposite, that is, have a container folder replace your host folder.
However, there is a workaround: manually copy the files from your container folder to your host folder before using them, as you have suggested.
For example:
run your Docker image with a volume mapping between your host folder and a temp folder: docker run -v /host/empty/dir:/some-temp-folder -it myimage
copy the content of /container/folder/with/code/files into /some-temp-folder, to fill your host folder with your container folder's content
run your container with a volume mapping on /host/empty/dir, which is now no longer empty: docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
Note that steps 1 & 2 may be replaced by: Copying files from Docker container to host.
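Concretely, the first two steps collapse into a single command (paths and image name are the question's placeholders):
docker run --rm -v /host/empty/dir:/some-temp-folder myimage \
  cp -r /container/folder/with/code/files/. /some-temp-folder/
After that, /host/empty/dir contains the code, and the original command from the question behaves as expected.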

-v deleted all the data from the docker container

I made a Docker image called myImage. There is a folder, /data, that I want to let users edit by themselves. I read that the -v flag can mount a volume, so I used it like the following:
I run the container with this command:
docker run -v /my_local_path:/data -it myImage /bin/bash
But surprisingly, Docker cleared all the files in /data in the container. This is not what I want... I actually want the host to get all the files from /data... :(
How can I do that?
When you share a volume like this, the volume on the host overrides the folder in the container, so the files in the container's folder will be hidden (they are not deleted, but they are inaccessible while the mount is in place).
What you need to do is put the files in the container in folder A (a folder in the container), and mount the host folder at folder B (another folder in the container). Then, AFTER the volume is mounted, copy the files from folder A to folder B. These files will then be available both to the host and inside the container.
You can achieve this 'copy files' operation using an ENTRYPOINT script in your Dockerfile.
See Run a script in Dockerfile.
You want ENTRYPOINT rather than RUN here: RUN executes at build time, before any volume is mounted, whereas an ENTRYPOINT script runs each time the container starts, which is after the volume is mounted.
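A minimal sketch of that idea, where folder A is /data-seed and folder B is the mounted /data (the image name and script are placeholders; note image names must be lowercase):
FROM myimage
# keep a copy of the shipped files outside the mount point (folder A)
RUN cp -r /data /data-seed
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/bin/bash"]
And entrypoint.sh:
#!/bin/sh
# runs at container start, after -v has mounted the host folder on /data;
# seed the mount from the copy if the host folder started out empty
if [ -z "$(ls -A /data 2>/dev/null)" ]; then
  cp -r /data-seed/. /data/
fi
exec "$@"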
