Docker RUN layer has no mounted volumes

In my docker-compose file, I am mounting a local folder into the container. I can see and use the mounted volume with CMD in the Dockerfile, but not with RUN. From the docs, RUN appears to operate on a totally clean layer. Is there a way for RUN to use the mount points specified in the docker-compose file?

You can't mount volumes during the docker build process, regardless of whether you use docker itself, docker-compose, or some other tool. The whole idea is that the build process should be as independent of your environment as possible, so that the resulting images have no dependencies on your local system and can be shared more easily.
There are generally alternate ways of approaching whatever problem you're trying to solve that do not require exposing data into your build process.
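The usual alternative is to put the data into the build context and COPY it in. A minimal sketch, assuming a hypothetical config/ directory next to the Dockerfile:
FROM alpine:3.19
# config/ lives in the build context, so it is available at build time
COPY config/ /app/config/
# RUN can now read the copied files; no mount is needed
RUN cat /app/config/settings.conf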

Actually, there's a big difference between CMD and RUN:
CMD is used to provide the default command (or its arguments) that is executed when you start the container.
RUN is used to execute a command at build time, creating a new image layer.
In short: volumes are not available during the build step (when RUN is executed).
Docker containers have two ways of receiving "external" files:
In the build step, the build context is passed in.
In the run step, the container layer plus volumes are used.
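A hedged docker-compose sketch of that split (service and path names are hypothetical): the build context feeds COPY and RUN at build time, while volumes: only takes effect when the container runs:
services:
  web:
    build:
      context: .            # everything here is visible to COPY/RUN during the build
    volumes:
      - ./src:/app/src      # mounted only at run time, never during the build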
For CMD, see:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
For RUN, see:
https://docs.docker.com/engine/reference/builder/#run
For the build context, see:
https://docs.docker.com/engine/reference/builder/#usage
(docker-compose) https://docs.docker.com/compose/compose-file/#build

Related

Is it possible to specify a custom Dockerfile for docker run?

I have searched high and low for an answer to this question. Perhaps it's not possible!
I have some Dockerfiles in directories such as dev/Dockerfile and live/Dockerfile. I cannot find a way to provide these custom Dockerfiles to docker run. docker build has the -f option, but I cannot find a corresponding option for docker run that I can use from the root of the app. At the moment I think I will write my npm/gulp script to simply change into those directories, but this is clearly not ideal.
Any ideas?
You can't - that's not how Docker works:
The Dockerfile is used with docker build to build an image
The resulting image is used with docker run to run a container
If you need to make changes at run time, then you need to modify your base image so that it can take, e.g., a command-line argument to docker run, or a configuration file provided as a mount.
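A hedged sketch of that two-step flow with the directories from the question (image tags are hypothetical):
docker build -f dev/Dockerfile -t myapp:dev .
docker build -f live/Dockerfile -t myapp:live .
docker run --rm myapp:dev
# run-time differences go through arguments or mounts, not a different Dockerfile:
docker run --rm -v "$(pwd)/dev.conf":/app/app.conf myapp:live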

Dealing with data in Docker Containers with Gitlab-Ci

So I am using gitlab-ci to deploy my websites in docker containers. Because the gitlab-ci Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have two volumes in my container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on each build) and a mount nested "inside" that mount, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to stop and remove the container manually before I can run my build (which rather defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker image and only mount permdata (it seems you can't add data to a container with docker-compose without the volume option, the way you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
it seems you can't add data to a container with docker-compose without the volume option, the way you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple containers for an application or project.
You can modify the Dockerfile to copy in the git files.
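A hedged sketch of option 2, using the paths from the question (base image and service name are hypothetical):
# Dockerfile: bake the git repo into the image at build time
FROM php:7-apache
COPY . /var/www/html/
# docker-compose.yml: keep only the persistent data as a mount
services:
  web:
    build: .
    volumes:
      - /srv/data:/var/www/html/software/permdata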

Sharing files between container and host

I'm running a docker container with a volume /var/my_folder. The data there is persistent: when I stop the container, it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link : Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, overwrite (shadow) the contents of that path in the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script, AND not as part of your build, THEN they will be available from the host machine once mounted.
The solution is therefore to run the commands that create the files in the ENTRYPOINT script.
Failing this, copy the files to another directory during the build and then copy them back in your ENTRYPOINT script.
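A minimal sketch of that stash-and-restore pattern, with hypothetical paths and file names:
# Dockerfile
FROM alpine:3.19
RUN mkdir -p /seed && echo "generated at build" > /seed/example.txt   # stash outside the mount point
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
CMD ["sh"]
# entrypoint.sh
#!/bin/sh
mkdir -p /var/my_folder
cp -r /seed/. /var/my_folder/   # runs after the host volume is mounted, so the files show up on the host
exec "$@"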

Run commands on creation of a new Docker container

Is it possible to add instructions like RUN to the Dockerfile that, instead of running during the docker build command, execute when a new container is created with docker run? I think this could be useful to initialize a volume attached to the host file system.
Take a look at the ENTRYPOINT command. This specifies a command to run when the container starts, regardless of what someone provides as a command on the docker run command line. In fact, it is the job of the ENTRYPOINT script to interpret any command passed to docker run.
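A hedged sketch of such a script (the name docker-entrypoint.sh is hypothetical; it would be wired up with ENTRYPOINT ["/bin/sh", "/docker-entrypoint.sh"] in the Dockerfile):
#!/bin/sh
mkdir -p /data/cache   # initialize the volume attached at docker run time
exec "$@"              # then hand off to whatever command was passed to docker run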
I think you are looking for CMD:
https://docs.docker.com/reference/builder/#cmd
The main purpose of a CMD is to provide defaults for an executing
container. These defaults can include an executable, or they can omit
the executable, in which case you must specify an ENTRYPOINT
instruction as well.
Note: don't confuse RUN with CMD. RUN actually runs a command and
commits the result; CMD does not execute anything at build time, but
specifies the intended command for the image.
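A tiny illustrative Dockerfile (contents hypothetical) that shows the contrast:
FROM alpine:3.19
RUN date > /build-stamp.txt          # executed once by docker build; its result is committed to a layer
CMD ["cat", "/build-stamp.txt"]      # not executed at build time; becomes the container's default command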
You should also look into using data containers; see this excellent blog post:
Persistent volumes with Docker - Data-only container pattern
http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/
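The gist of that pattern, as a hedged sketch (container and image names hypothetical):
docker create -v /data --name datastore busybox true                # data-only container; it never needs to run
docker run --rm --volumes-from datastore alpine sh -c 'echo hi > /data/f.txt'
docker run --rm --volumes-from datastore alpine cat /data/f.txt    # the volume outlives the individual containers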

Copy a file from container to host during build process

How do I copy files from a docker container to the host machine during docker build command?
As a part of building my docker image for my app, I'm running some tests inside it, and I would like to copy the output of the test run into the host (which is a continuous integration server) to do some reporting.
I wouldn't run the tests during the build; that will only increase the size of your image. I would recommend building the image and then running it with a host volume mounted into the container and the working directory set to the mount point.
docker run -v `pwd`/results:/results -w /results -t IMAGE test_script
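In a CI script, that might look like the following (image and script names hypothetical):
docker build -t myapp-test .
mkdir -p results
docker run --rm -v "$(pwd)/results":/results -w /results myapp-test run_tests.sh
# whatever run_tests.sh wrote to /results now sits in ./results on the CI host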
There is no easy way to do this. Volumes are only created at run time. You can grab the file out of the docker filesystem (e.g. mine is at /var/lib/docker/devicemapper/mnt/CONTAINER_ID/rootfs/PATH_TO_FILE), though there is no good way to figure out when your test process has completed. You could create a marker file when it finishes and watch for it with inotify, but this is ugly.
