How do I copy files from a docker container to the host machine during docker build command?
As a part of building my docker image for my app, I'm running some tests inside it, and I would like to copy the output of the test run into the host (which is a continuous integration server) to do some reporting.
I wouldn't run the tests during the build; it only increases the size of your image. I would recommend building the image and then running it with a host volume mounted into the container and the working directory set to the mount point:
docker run -v `pwd`/results:/results -w /results -t IMAGE test_script
There is no easy way to do this. Volumes are only created at run time. You can grab the file out of Docker's filesystem on the host (e.g. mine is under /var/lib/docker/devicemapper/mnt/CONTAINER_ID/rootfs/PATH_TO_FILE), though there is no good way to figure out when your test process is complete. You could have the tests create a marker file when they finish and watch for it with inotify, but this is ugly.
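A rough sketch of that marker-file idea, assuming inotify-tools is installed on the host and that the container ID, storage-driver path, and result file names are filled in for your setup:
# the path depends on your storage driver; this matches the devicemapper example above
ROOTFS=/var/lib/docker/devicemapper/mnt/CONTAINER_ID/rootfs
inotifywait -e create "$ROOTFS/results"         # block until a new file (e.g. a "done" marker) appears
cp "$ROOTFS/results/report.xml" ./ci-reports/   # then copy the test report into the CI workspace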
Related
This is basically a follow-up question to How to include files outside of Docker's build context?: I'm using large files in all of my projects (several GBs) which I keep on an external drive, only used for development.
I want to COPY or ADD these files to my docker container when building it. The answer linked above allows one to specify a different path to a Dockerfile, potentially extending the build context. I find this impractical, since it would require setting the build context to the system root (?) just to be able to include a single file.
Long story short: Is there any way or workaround to include a file that is far removed from the docker build context?
Three suggestions on things you could try:
include a file that is far removed from the docker build context?
You could construct your own build context by copying (cp or tar) files on the host into a dedicated directory tree. You don't have to use the actual source tree or your build tree.
rm -rf docker-build
mkdir docker-build
cp -a Dockerfile build/the-binary docker-build
cp -a /mnt/external/support docker-build
docker build ./docker-build
# reads docker-build/Dockerfile, and the files in the
# docker-build directory, but nothing else; only sends
# the docker-build directory to Docker as the build context
large files [...] (several GBs)
Docker doesn't deal well with build contexts this large. In the past I've seen docker build take a long time just on the step of sending the build context to the daemon, and docker push and docker pull have network issues when trying to send the gigabyte-plus layer around.
It's a little hacky and breaks the "self-contained image" model a little bit, but you can provide these files as a Docker bind-mount instead of including them in the image. Your application needs to know what to do if the data isn't there. When you go to deploy the application, you also need to separately distribute the files alongside the Docker image and other deployment artifacts.
docker run \
-v /mnt/external/support:/app/support \
...
the-image-without-the-support-files
only used for development
Potentially you can get away with not using Docker at all during this phase of development. Use a local source tree and local development tools; run your unit tests against these large test fixtures as needed. Build a Docker image only when you're about to run pre-commit integration tests; that may be late enough in the development cycle that you don't need these files.
I think the main thing you are worried about is sending all the files in a directory to the docker daemon while it builds the image.
When the directory is that big (several GB), transferring the build context alone takes a lot of time.
If the requirement is just to use those files while you build something inside docker, you can mount them into a container instead.
A tricky way
Run a container from the base image and mount the directories inside it: docker run -d -v local-path:container-path
Get inside the container: docker exec -it CONTAINER_ID bash
Run the build step: ./build-something.sh
Create an image from the running container: docker commit CONTAINER_ID
Tag the image: docker tag IMAGE_ID tag:v1 (you can get the image ID from the previous command); the steps are sketched together below.
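Put together, the steps might look roughly like this (the mount path, build script, and image/tag names are placeholders, and it assumes the base image has a shell):
docker run -d --name build-helper -v /mnt/external/support:/support base-image sleep infinity
docker exec build-helper /support/build-something.sh   # run the build against the mounted files
docker commit build-helper                             # prints the new image ID
docker tag IMAGE_ID myapp:v1                           # tag the ID printed above
docker rm -f build-helper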
From a long-term perspective this method may seem tedious, but if you only need to build the image once or twice it is worth trying. I used it for one of my docker images because I wanted to avoid sending a large number of files to the docker daemon during the image build.
The COPY command takes a source and a destination;
just specify the full absolute path to your hard drive's mount point as the source directory:
COPY /absolute_path/to/harddrive /container/path
I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml, and indeed there isn't any .yml file on the server. How do I deal with this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you pack a Docker image into an archive and then load that image on another machine. You can check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
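Put together, a transfer done this way could look roughly like the following (the host name and paths are examples, and the compose file must reference the loaded image by name):
# on the development machine
docker save image-name > image-name.tar
scp image-name.tar docker-compose.yml remote_machine:/home/your_user/
# on the server, from /home/your_user
docker load < image-name.tar
docker-compose up -d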
EDIT: Additional info concerning the updated question:
UPDATE: When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub or a self-hosted registry) and then pull it from there.
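As a sketch of that workflow (the registry address and image name are placeholders):
# on the development machine
docker tag image-name registry.example.com/myproject/image-name:1.0
docker push registry.example.com/myproject/image-name:1.0
# on the server (or let docker-compose pull it, if the compose file references this image)
docker pull registry.example.com/myproject/image-name:1.0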
In my docker-compose setup, I am mounting a local folder to a folder for Docker. I can see and use the mounted volume with CMD in the Dockerfile, but not with RUN. From the docs, RUN seems to work on a totally clean layer. Is there a way for RUN to use the mount points specified in the docker-compose file?
You can't mount volumes during the docker build process, regardless of whether you use docker itself, docker-compose, or some other tool. The whole idea is that the build process should be as independent of your environment as possible, so that the resulting images have no dependencies on your local system and can be shared more easily.
There are generally alternate ways of approaching whatever problem you're trying to solve that do not require trying to expose data into your build process.
Actually, there's a big difference between CMD and RUN:
CMD provides the command (or its arguments) that is executed when you start the container.
RUN provides a command that is executed at build time to create a new image layer.
In short: volumes are not available during the build step (when RUN is executed).
Docker containers get "external" files in two ways (a minimal contrast is sketched after this list):
At build time, only the build context is passed in.
At run time, the container layer plus any volumes are available.
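For example (the image name and paths are just placeholders):
# build step: only the build context ("." below) is sent to the daemon and is
# usable by COPY/ADD/RUN
docker build -t myimage .
# run step: volumes/bind mounts become available in addition to the image contents
docker run -v "$PWD/data:/data" myimage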
See for CMD:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
See for RUN:
https://docs.docker.com/engine/reference/builder/#run
For context see:
https://docs.docker.com/engine/reference/builder/#usage
(docker-compose) https://docs.docker.com/compose/compose-file/#build
I am using gitlab-ci to deploy my websites in docker containers. Because the gitlab-ci docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my docker container: ./:/var/www/html/ (the contents of my git repo, i.e. the files I want to replace on each build) and a mount that sits "inside" the first one, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-inside-a-mount it gets a "device busy" error and aborts. So I have to stop and remove the container manually before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and can give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile, it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
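As a sketch (it assumes the repo root is the build context and reuses the paths from the question), the Dockerfile could bake the repo contents into the image and leave only the persistent data as a mount:
# hypothetical Dockerfile lines: copy the git files into the image at build time
COPY . /var/www/html/
# then keep only /srv/data:/var/www/html/software/permdata as a volume in docker-compose.yml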
I'm running a docker container with a volume /var/my_folder. The data there is persistent: When I close the container it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link : Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, overlay (and hide) whatever the image already has at that path in the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated by your entrypoint script AND not as part of your build, they will be available from the host machine once the volume is mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing that, copy the files to another directory during the build and then copy them back in your ENTRYPOINT script.
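A minimal sketch of that entrypoint approach, assuming the image keeps a pristine copy of the files under /opt/seed (created at build time) and the host folder is mounted at /var/my_folder:
#!/bin/sh
# entrypoint.sh (sketch): seed the bind-mounted folder at container start, then
# hand off to the container's main command
cp -rn /opt/seed/. /var/my_folder/   # -n: do not overwrite files the host already has
exec "$@"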