Docker copy files to and mount same folder - docker

I am working on the 'tomcat:7.0.75-jre8-alpine' base image and want to deploy my web application along with its configuration files. Below is what I am doing in the Dockerfile:
......
COPY <my-app-configurations> /org/app/data
COPY <my-app-configurations> /org/app/conf
......
CMD ["catalina.sh", "run"]
And I am using the below command to create a container from the above image:
$ docker run -p 8080:8080 -v "/c/Users/jaffy/app:/org/app" myapp-image
The '/c/Users/jaffy/app' folder is initially empty; I want it to receive all the contents of '/org/app' and stay in sync with it.
Initially, all configurations are copied into the '/org/app' folder, but when '/c/Users/jaffy/app' is mounted, '/org/app' gets cleaned/emptied.
How can I solve this so that the host machine folder starts out empty but afterwards reflects the exact state of the container's '/org/app' folder and its sub-directories?
Thanks a lot in advance.

There is no way to keep the container's folder content when you mount/share a volume over that folder, because the mount replaces the folder rather than merging with it.
If you don't need to replace all the files in the folder, you can mount only the files that need to change, like:
$ docker run -p 8080:8080 -v "/c/Users/jaffy/app/web.xml:/org/app/web.xml" my-app-image
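If you do need the host folder to end up with the container's defaults, a common workaround (a sketch, not part of the original answer; the /org/app-defaults staging path and entrypoint.sh are assumptions) is to bake the files into a staging directory and have an entrypoint copy them into the mounted folder at startup:

# Dockerfile (sketch)
COPY <my-app-configurations> /org/app-defaults/data
COPY <my-app-configurations> /org/app-defaults/conf
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["catalina.sh", "run"]

# entrypoint.sh (sketch)
#!/bin/sh
# populate the (possibly empty) host mount with the image's defaults
if [ -z "$(ls -A /org/app 2>/dev/null)" ]; then
  cp -r /org/app-defaults/. /org/app/
fi
exec "$@"

This way the first run fills /c/Users/jaffy/app with the defaults, and later changes inside the container show up on the host because it is the same bind-mounted directory.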

Related

Is there a way to copy the contents of the directory into the Docker container?

Pretty much the title says it all.
I know I can copy the file (from the host) into a docker container.
I also know I can copy the directory into a docker container.
But how to copy the contents of a directory (preserving all subdirectories) into a directory in a docker container?
On my host I have a directory called src. On the docker container I have a directory /var/www/html. That src has both files and directories. I need all of them to be copied (with the command) into the container; not bound, not mounted, but copied.
It sounds like a trivial operation, but I've tried so many ways and couldn't find anything online that works! Ideally, it would be best if that copy operation would work every time I run the docker-compose up -d command.
Thanks in advance!
I found the solution. There is a way of specifying the context directory explicitly; in that case the Dockerfile also needs to be specified explicitly.
In the docker-compose.yml one should have the following structure:
services:
  php:
    build:
      context: .
      dockerfile: ./php/Dockerfile
In this case the src is "visible" because it is inside the context! Then in that Dockerfile the COPY command will work!
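For completeness, the COPY line in that Dockerfile could then look roughly like this (a sketch; only the src and /var/www/html paths come from the question, the base image is an assumption):

# ./php/Dockerfile (sketch)
FROM php:7.4-apache
# copies the contents of src (files and sub-directories) into the container
COPY ./src/ /var/www/html/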
Update: There is another way to achieve this via the command line as well. However, for me it only started to work when I added the ./ at the end. So the full command is:
docker cp ./src/./ $(docker-compose ps|grep php|awk '{print $1}'):/var/www/html/

Nothing happens when copying file with Dockerfile

I use docker-compose for a simple keycloak container and I've been trying to install a new theme for keycloak.
However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands works or causes any events or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick recap of Docker basics:
docker build: Create an image from a Dockerfile
docker run: Create a container from an image.
(you can create the image yourself or use an existing image from Docker Hub)
Based on what you said, you have 2 options.
Option 1: Create a new Docker image based on the existing one and add the theme,
something like
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
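For example, building and tagging it could look roughly like this (the image name is an assumption):

docker build -t keycloak-with-theme .
# or, if the service in docker-compose.yml has a build: section instead of image:
docker-compose build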
Option 2: Mount the theme into the correct directory as a volume,
using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, files have to be in the same directory as your Dockerfile (or a subdirectory of it) and have to be present at build time. No absolute host paths.
/tmp as destination is also a bit tricky, because the startup process of the container might have a /tmp cleanout, which means that you would never see that file in a running container.
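So a working setup keeps the theme inside the build context and uses relative source paths, roughly like this (the kctheme directory name is taken from the question, the layout itself is an assumption):

# layout next to the Dockerfile
#   ./Dockerfile
#   ./docker-compose.yml
#   ./kctheme/theme/login/...

# Dockerfile (sketch)
FROM jboss/keycloak
COPY ./kctheme/theme/login/ /opt/jboss/keycloak/themes/keycloak/login/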

Docker volumes not mounting/linking

I'm in Docker Desktop for Windows. I am trying to use docker-compose as a build container, where it builds my code and the built code then ends up in my local build folder. The build processes are definitely succeeding; when I exec into my container, the files are there. However, nothing happens with my local folder -- no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
RUN npx serve -s -l tcp://0.0.0.0:5000 build
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, some nonexistent source directory like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the source folder doesn't exist).
However when I set my volumes to - "./build:/app" (the correct source for the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
In the compose file, this will mount ./build from the host at /srv/build inside the container. This happens at run time, not during your image build, i.e. after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and no files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform your steps to populate the build folder as part of the container running. This is common for development. To do this, your CMD likely becomes a script of several commands to run, with the last step being an exec to run your app.
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount. There's an example of a named bind mount in my presentation here.
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
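As a rough sketch of that last option (assuming the image already contains the built files in /app/build, as in the multi-stage build above, and the host directory is mounted at /srv/build):

# entrypoint.sh (sketch)
#!/bin/sh
# copy the build output produced at image build time into the host mount
cp -r /app/build/. /srv/build/
# then hand off to the container's real command
exec "$@"

# Dockerfile additions (sketch)
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["serve", "-p", "80", "-s", "/app/build"]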
tl;dr: Volumes aren't mounted during the build stage, only while running a container. You can run a command like docker run -v ./build/:/srv/build <image id> cp -R /app /srv/build to copy the data to your local disk.
While Docker is building the image, it does all its actions in ephemeral containers; each command in your Dockerfile is run in a separate container, each making a layer that eventually becomes the final image.
The result of this is that the data flow during the build is unidirectional: you are unable to mount a volume from the host into the container. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually ., which represents the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't actually do any work; it just sends commands to the Docker daemon, dockerd. The build stages shouldn't change anything on your local system; the container is designed to encapsulate the changes only into the container image and give you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.
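If all you need are the build artifacts on the host, another option (not from this answer) is to create a stopped container from the image and copy the files out with docker cp:

docker build -t webapp-build .
id=$(docker create webapp-build)      # create a container without starting it
docker cp "$id":/srv/build ./build    # copy the artifacts to the host
docker rm "$id"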

Docker COPY command not mounting a directory

Host OS: Linux
Container OS: Linux
I'm trying to learn how to use docker. I use docker-compose and I'm successfully building images and running containers.
Now if I want to mount some directory inside the container, the documentation says that I should use the COPY command inside the Dockerfile.
COPY /path/to/my/addons/ /path/to/directory/inside/container
Sadly when I compose this container the COPY command is ignored and my stuff from /path/to/my/addons doesn't make it to the container.
I've also tried with ADD command, but same problem.
Absolute paths
First, you can't use absolute host paths as the source for COPY. All source paths must be inside the build context, which in this setup means relative to the Dockerfile. If the folder structure on your host is like this
my-docker-directory
-- Dockerfile
-- docker-compose.yml
-- addons
then you're able to use COPY addons /path/to/directory/inside/container. For all subsequent explanations, I assume that you have an addons folder relative to the Dockerfile.
Mounting a directory
COPY doesn't simply mount a folder into the container at runtime. It doesn't really mount the directory at all. Instead, addons is copied to /path/to/directory/inside/container inside the image. It's important to understand that this process is unidirectional (host > image) and only happens when the image is built.
COPY is designed to add dependencies to the image that are required during build time, like source code that gets compiled to binaries. That's the reason why you can't use absolute paths. A Dockerfile is usually placed together with source code/config files at the top level of the project.
The build process of an image happens only on the first run, unless you force it using docker-compose up --build. But it doesn't seem that this is what you want. To mount a directory from the host at runtime, use a volume in the docker-compose file:
version: '3'
services:
  test:
    build: .
    volumes:
      - ./addons/:/path/to/directory/inside/container
When to use COPY and when to use volumes?
It's important to realize that COPY and ADD copy the files into the image at build time, whereas volumes mount them from the host at runtime (without including them in the image). So you usually copy general things into the image that every user needs, like default configuration files.
Volumes are required to include files from the host, like customized configuration files, or persistent things such as the data directory of a database. Without volumes those containers still work, but are not persistent, so that content would be lost when the container is removed and recreated.
Please note that one doesn't exclude the other: it's fine to COPY a default configuration for some application into the image, which the user may then override with volumes. Especially during development this can make things easier, because you don't have to rebuild the entire image for a single changed config file.*
* Although it's good practice to optimize Dockerfiles for the integrated caching mechanism. If a Dockerfile is well written, rebuilding after small config changes often doesn't take too long. But that's another topic, out of scope here.
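A short sketch of that combination, assuming an nginx image with a default config baked in and an optional override during development (the config file names are assumptions):

# Dockerfile
FROM nginx:alpine
COPY default.conf /etc/nginx/conf.d/default.conf

# docker-compose.yml (development override)
version: '3'
services:
  web:
    build: .
    volumes:
      - ./dev.conf:/etc/nginx/conf.d/default.conf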
More detailed explanation with an example
Basic setup with COPY in Dockerfile
As a simple example, we create a Dockerfile from the nginx webserver image and copy some HTML into it:
FROM nginx:alpine
COPY my-html /usr/share/nginx/html
Let's create the folder with demo content:
mkdir my-html
echo "Dockerfile content" > my-html/index.html
and add a minimalistic docker-compose.yml
version: '3'
services:
  test:
    build: .
If we run it for the first time using docker-compose up -d, the image gets built and our test page is served:
root@server2:~/docker-so-example# docker-compose up -d
Creating network "docker-so-example_default" with the default driver
Creating docker-so-example_test_1 ... done
root@server2:~/docker-so-example# curl $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-so-example_test_1)
Dockerfile content
Let's manipulate our testfile:
echo "NEW Modified content" > my-html/index.html
If we request our server with curl again, we get the old response:
root@server2:~/docker-so-example# curl $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-so-example_test_1)
Dockerfile content
To apply our content, a rebuild is required:
docker-compose down && docker-compose up -d --build
Now we can see our changes:
root@server2:~/docker-so-example# curl $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-so-example_test_1)
NEW Modified content
Use volumes in docker-compose
To show the difference, we use volumes by modifying our docker-compose.yml file like this:
version: '3'
services:
  test:
    build: .
    volumes:
      - ./my-html:/usr/share/nginx/html
Now restart the containers using docker-compose down && docker-compose up -d and try it again:
root@server2:~/docker-so-example# echo "Again changed content" > my-html/index.html
root@server2:~/docker-so-example# curl $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-so-example_test_1)
NEW Modified content
root@server2:~/docker-so-example# echo "Some content" > my-html/index.html
root@server2:~/docker-so-example# curl $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-so-example_test_1)
Some content
Notice that we didn't re-build the image and our modifications apply immediately. Using volumes, the files are not included in the image.
The COPY command inside a Dockerfile copies the content into the image while building. Mounting a volume is a different thing. For mounting you need to use
docker run -v <host_path>:<container_path> ...
What exactly do you want to achieve? Do you want to see the folders inside the container on your host?
Move your addons folder to the location where your Dockerfile is, and then add something like this to the Dockerfile:
RUN mkdir -p /path/to/directory/inside/container
COPY ./addons/* /path/to/directory/inside/container

docker COPY is not copying the files

FROM nginx:alpine
EXPOSE 80
COPY . /usr/share/nginx/html
I am trying to run an Angular app with the following Docker configuration. It does work, but I can't see the files/directory that were supposed to be copied to that location "/usr/share/nginx/html", which is super confusing. The directory only contains the default index.html that nginx created.
Does it store it in memory or something? The files are not there, but it does serve my website properly.
Build:
docker build -t appname .
Run:
docker run -d -p 80:80 appname
It seems like the COPY destination path is not a path on the server's disk but a path inside the Docker image, which explains why I can't see my files on the server's disk.
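You can verify this by listing the directory inside the image or the running container instead of on the host, for example:

# list the copied files in a throwaway container created from the image
docker run --rm appname ls -la /usr/share/nginx/html
# or inside the already running container
docker exec <container_id> ls -la /usr/share/nginx/html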
