Is there a way to use external volumes during a Docker image build?
I have a situation where I would like to use a configuration file that lives on an external volume during the Docker image build. Is that possible?
(edited to reflect current Docker CLI behavior)
If by 'docker image build' you mean running a single 'docker build ...' command: no, there is no way to do that (at least, not in the most recent documentation I have read). However, nothing prevents you from performing the step that needs the external volume with direct docker commands and then committing the container and tagging the result, just as 'docker build' would. Assuming this is the last step in your build, put all the other commands (the ones that don't need the volume) into a Dockerfile and then do this:
tmp_img=$(docker build -q .)   # -q prints only the new image ID
# run the volume-dependent build command; arguments go after the image name,
# which is simpler than overriding --entrypoint (that flag takes only the executable)
tmp_container=$(docker run -d -v "$my_ext_volume:$my_mount_path" "$tmp_img" your-volume-dependent-build-command)
docker wait "$tmp_container"
docker commit "$tmp_container" my_repo/image_tag:latest
docker rm "$tmp_container"
This does the same as having a RUN command in the Dockerfile, but with the volume mounted. The commit command in the example also tags the image.
It is a bit more complex if you need other Dockerfile commands after the volume-dependent one, but in most cases you can combine RUN commands and rearrange your install so that the manual run-with-volume step comes last, which keeps things simple.
You can copy the file into the image (ADD or COPY) and rm it as one of the last steps; see the sketch below. Be aware that the file still exists in the layer created by the copy, so it remains recoverable from the image history even after the rm.
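A minimal sketch of that approach (the file name and the build step that consumes it are hypothetical):

COPY settings.xml /tmp/settings.xml
# hypothetical build step that consumes the config
RUN ./configure-with /tmp/settings.xml
# remove the file at the end (it still exists in the COPY layer above)
RUN rm /tmp/settings.xml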
podman is an alternative to Docker whose CLI is largely the same, but which also supports mounting volumes at build time.
I use this to load data into test databases without having to copy the data into the image first.
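For example (host path and image name are placeholders; the mounted directory is visible to RUN instructions during the build but is not persisted into the image):

podman build --volume /host/testdata:/testdata:ro -t my-test-image .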
You can use ADD combined with ARG (build-time parameters) to access files or directories during the build without having to hardcode their location.
ARG MAVEN_SETTINGS=settings.xml
ADD $MAVEN_SETTINGS ./
And now you can change the file location during the build with:
docker build --build-arg MAVEN_SETTINGS=someotherfile.xml .
We are not restricted to Docker to build OCI images.
With buildah it's possible to mount volumes from the host that won't be persisted in the final image. Useful for configuration and secrets.
buildah bud --volume /home/test:/myvol:ro -t imageName .
I have a Dockerfile that I use to build the same image for slightly different purposes. Most of the time I want it to be just an "environment" without a specific entrypoint, so that the user specifies the command on the docker run line:
docker run --rm -it --name ${CONTAINER} ${IMAGE} any_command parameters
But for some applications I want users to be able to download the container and run it without having to specify a command.
docker build -t ${IMAGE}:demo (--entrypoint ./demo.sh) <== would be nice to have
Yes, I can have a different Dockerfile for that, or append an entrypoint to the basic Dockerfile during the build, or various other mickey-mouse hacks, but those are all one more thing that can go wrong; they add complexity and are workarounds for the essential requirement.
Any ideas? Staged builds?
The Dockerfile CMD directive sets the default command. So if your Dockerfile ends with
CMD default_command
then you can run the image in multiple ways
docker run "$IMAGE"
# runs default_command
docker run "$IMAGE" any_command parameters
# runs any_command instead
A container must run a command; you can't "just run a container" with no process in it.
You do not want ENTRYPOINT here, since its syntax is noticeably harder to work with at the command line: your any_command would be passed as arguments to the entrypoint process rather than replacing the built-in default.
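To make the difference concrete (a sketch; demo.sh is the hypothetical script from the question):

# with CMD ["./demo.sh"] at the end of the Dockerfile:
docker run "$IMAGE"                          # runs ./demo.sh
docker run "$IMAGE" any_command parameters   # runs any_command instead

# with ENTRYPOINT ["./demo.sh"] instead:
docker run "$IMAGE" any_command parameters   # runs ./demo.sh any_command parameters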
Is it possible to have multiple Dockerfiles with different extensions, for example to build separate services, or for other use cases? For example:
Dockerfile.web
Dockerfile.celery
and why?
You may have multiple Dockerfiles, but you can only use one at a time when building an image. By default, docker build looks for a file named Dockerfile in the build context. To specify another file, use the -f flag.
docker build .                    # uses Dockerfile
docker build -f Dockerfile .      # same as running without -f
docker build -f Dockerfile.web .  # uses Dockerfile.web
https://docs.docker.com/engine/reference/commandline/build/
I have a Docker container that I use to build software and generate shared libraries. I would like to use those libraries in another Docker container for actually running applications. To do this, I run the build container with a mounted volume so that the libraries end up on the host machine.
My Dockerfile for the RUNTIME container looks like this:
FROM openjdk:8
RUN apt update
ENV LD_LIBRARY_PATH /build/dist/lib
RUN ldconfig
WORKDIR /build
and when I run it with the following:
docker run -u $(id -u ${USER}):$(id -g ${USER}) -it -v $(realpath .):/build runtime_docker bash
I do not see any of the libraries from /build/dist/lib in the ldconfig -p cache.
What am I doing wrong?
You need to COPY the libraries into the image before you RUN ldconfig; volumes won't help you here.
Remember that first you run a docker build command, which runs all of the commands in the Dockerfile without any volumes mounted. Then you take that image and docker run a container from it. Volume mounts happen only at docker run time, but by then the RUN ldconfig from the build has already happened.
In your Dockerfile, you should COPY the files into the image. There's no particular reason to not use the "normal" system directories, since the image has an isolated filesystem.
FROM openjdk:8
# Copy shared-library dependencies in
COPY dist/lib/libsomething.so.1 /usr/lib
RUN ldconfig
# Copy the actual binary to run in and set it as the default container command
COPY dist/bin/something /usr/bin
CMD ["something"]
If your shared libraries are only available at container run-time, the conventional solution (as far as I can tell) would be to include the ldconfig command in a startup script, and use the dockerfile ENTRYPOINT directive to make your runtime container execute this script every time the container runs.
This should achieve your desired behaviour and avoids generating a new container image every time you rebuild your code. That is slightly different from the common use case of building a fresh image for every build, but I think it's perfectly valid and quite compatible with the way Docker works. Docker has historically been used as a CI/CD tool to streamline post-build workflows, but it is increasingly used for other things, such as the build step itself, so people naturally come up with slightly different ways of using it.
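A minimal sketch of such a startup script (the file name and the Dockerfile wiring are assumptions, not the asker's code):

#!/bin/sh
# entrypoint.sh: refresh the linker cache so libraries mounted at run time are found,
# then hand control to whatever command the container was given
ldconfig
exec "$@"

In the Dockerfile, something like:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["something"]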
How can I add a file from my project into a Docker container used in a gitlab-ci job? Suppose I have the job below in my .gitlab-ci.yml.
build:master:
  image: ubuntu:latest
  script:
    - cp sample.txt /sample.txt
  stage: build
  only:
    - master
How do I copy sample.txt inside the Ubuntu image? I was thinking that, since it is already a running container, we can't perform a copy command directly but have to run
docker cp sample.txt mycontainerID:/sample.txt
but then how will I get mycontainerID? The container runs inside a GitLab runner, and a random ID is assigned on every run. Is my assumption wrong?
The file is already inside the container. If you read carefully through the CI/CD build log, you will see at the very top after pulling the image and starting it, your repository is cloned into the running container.
You can find it under /builds/<organization>/<repository>
(note that these are examples, and you have to adjust to your actual organization and repository name)
Or with the variable $CI_PROJECT_DIR
In fact, that is the directory you are in when starting the job.
For example, this .gitlab-ci.yml:
image: alpine

test:
  script:
    - echo "the project directory is - $CI_PROJECT_DIR"
    - cat $CI_PROJECT_DIR/README.md ; echo
    - echo "and where am I? - $PWD"
returns pipeline output showing the content of README.md printed from inside the container.
We do not need to copy anything; the repository files are already available in the container. GitLab does that for us.
Try using the ls (Linux) or dir (Windows) command, depending on your platform, to list the files and folders.
Your runner is already executing the script inside your Docker container.
What your job does here is:
- start a container from the Ubuntu image and mount your Git project in it
- cp sample.txt from the Git project's root to the container's /
- stop the container, reporting "job done"
That's basically what image means: use this docker image to start a container that will execute the commands listed in the script part.
I don't exactly understand what you're trying to achieve. If it's a build job, why don't you COPY your file from your Dockerfile and configure your job to build the image with docker build? A runner shell executor doing docker build -t your/image:latest -f build/Dockerfile . will do just fine. Then you push this image to some Docker registry (GitLab's, for example, or Docker Hub).
If your goal really is more complex and you just want to add a file to a running container, you can use the same runner (with a shell executor, not a docker one, so no image) and run something like
- docker run --name YOUR_CONTAINER_NAME -v $PWD:/mnt ubuntu:latest cp /mnt/sample.txt /sample.txt
- docker commit -m "Commit Message" -a "You" YOUR_CONTAINER_NAME your/image:latest
- docker push your/image:latest
- docker rm YOUR_CONTAINER_NAME
Note: I'm not 100% sure the first command would work, but that is the general idea of creating an image from a container without relying on a Dockerfile, if you really can't achieve your goal with one.
When I run the command:
docker run dockerinaction/hello_world
The first time, the following scenario plays out: Docker does not find the image locally, so it searches Docker Hub, downloads the image, and runs it.
The dockerinaction/hello_world Dockerfile can be seen below:
FROM busybox:latest
CMD ["echo", "hello world"]
So from the wording:
Docker searches Docker Hub for the image
There are several things I'm curious about:
Is the image dockerinaction/hello_world?
Does this image reference another image named busybox:latest?
What about the Dockerfile? Is that stored on my machine somewhere?
Answers to each bulleted question, in corresponding order:
Yes, the image is dockerinaction/hello_world.
Yes, the image does reference busybox:latest, and builds upon it.
No, the Dockerfile is not stored on your machine. The docker run command is downloading a compressed version of the built Docker image that it found on Docker Hub. In some ways, you can think of the Dockerfile as the source code and the built image as the binary.
If you wanted to, you could write your own Dockerfile with the following contents:
FROM busybox:latest
CMD ["echo", "hello world"]
Then, in the directory containing that file (named Dockerfile), you could:
$ docker build -t my-hello-world:latest .
$ docker run my-hello-world:latest
The docker build command builds the Dockerfile, which in this case is stored on your machine. The built Docker image is tagged as my-hello-world:latest, and is only available on your machine (where it was built) unless you push it somewhere. You can run the built image from your machine by referring to the tag in the docker run command, as in the second line above.