Are Dockerfiles stored on my machine?

When I run the command:
docker run dockerinaction/hello_world
The first time, the following scenario plays out:
The dockerinaction/hello_world Dockerfile can be seen below:
FROM busybox:latest
CMD ["echo", "hello world"]
So from the wording:
Docker searches Docker Hub for the image
There are several things I'm curious about:
- Is the image dockerinaction/hello_world?
- Does this image reference another image named busybox:latest?
- What about the Dockerfile? Is that stored on my machine somewhere?

Answers to each bulleted question, in corresponding order:
- Yes, the image is dockerinaction/hello_world.
- Yes, the image does reference busybox:latest, and builds upon it.
- No, the Dockerfile is not stored on your machine. The docker run command is downloading a compressed version of the built Docker image that it found on Docker Hub. In some ways, you can think of the Dockerfile as the source code and the built image as the binary.
If you wanted to, you could write your own Dockerfile with the following contents:
FROM busybox:latest
CMD ["echo", "hello world"]
Then, in the directory containing that file (named Dockerfile), you could:
$ docker build -t my-hello-world:latest .
$ docker run my-hello-world:latest
The docker build command builds the Dockerfile, which in this case is stored on your machine. The built Docker image is tagged as my-hello-world:latest, and is only available on your machine (where it was built) unless you push it somewhere. You can run the built image from your machine by referring to the tag in the docker run command, as in the second line above.
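Although the Dockerfile itself never lands on your machine, the instructions that produced each layer are recorded in the image metadata. If you're curious, docker history shows them (not part of the original answer, just a handy inspection command):
$ docker history dockerinaction/hello_world
The CREATED BY column includes the CMD instruction from the Dockerfile above, so you can usually reconstruct roughly how a pulled image was built.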

Related

Update Docker image when there are no changes to Dockerfile

Let's say I created a docker image using a command like
docker build -t myimage .
In the current directory where I built the image, I have
ls
Dockerfile
myscript.py
Later, I made changes to ONLY the "myscript.py" file. How do I update the image without needing to rebuild?
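A rough sketch of the two usual approaches (the in-container path /app/myscript.py is only an assumption, since the Dockerfile contents are not shown): rebuild and let Docker's layer cache skip the unchanged steps, or bind-mount the changed file at run time so no rebuild is needed:
docker build -t myimage .                                    # cheap when earlier layers are cached
docker run -v "$PWD/myscript.py:/app/myscript.py" myimage    # overrides the file without rebuilding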

How to copy a file from the repository, into the Docker container used for a job, in gitlab-ci.yml

How can I add a file from my project into a Docker image used in a gitlab-ci job? Suppose I have the below job in my .gitlab-ci.yml.
build:master:
  image: ubuntu:latest
  script:
    - cp sample.txt /sample.txt
  stage: build
  only:
    - master
How can I copy sample.txt into the Ubuntu image? I was thinking that since it is already a running container, we can't perform the copy command directly, but instead have to run
docker cp sample.txt mycontainerID:/sample.txt
but then how will I get mycontainerID? It will be running inside a GitLab runner, and a random ID will be assigned on every run. Is my assumption wrong?
The file is already inside the container. If you read carefully through the CI/CD build log, you will see at the very top, right after the image is pulled and started, that your repository is cloned into the running container.
You can find it under /builds/<organization>/<repository>
(note that these are examples, and you have to adjust to your actual organization and repository name)
Or with the variable $CI_PROJECT_DIR
In fact, that is the directory you are in when starting the job.
For example, this .gitlab-ci.yml:
image: alpine

test:
  script:
    - echo "the project directory is - $CI_PROJECT_DIR"
    - cat $CI_PROJECT_DIR/README.md ; echo
    - echo "and where am I? - $PWD"
The pipeline output then shows the project directory, the contents of README.md, and the working directory, which is the same as $CI_PROJECT_DIR. As you can see, the content of the README.md can be printed from inside the container.
We do not need to copy anything; the repository files are already available inside the container. GitLab does that for us.
Try the ls (Linux) or dir (Windows) command, depending on your platform, to list the files and folders.
Your runner is already executing the script inside your docker container.
What your job does here is:
- start a container using the Ubuntu image and mount your Git project in it
- cp sample.txt from the Git project's root to the container's root
- stop the container, reporting "job done"
That's basically what image means: use this docker image to start a container that will execute the commands listed in the script part.
I don't exactly understand what you're trying to achieve. If it's a build job, then why don't you actually COPY your file from your Dockerfile and configure your job to build it with docker build? A Runner shell executor doing docker build -t your/image:latest -f build/Dockerfile . will do just fine. Then you push this image to some Docker registry (GitLab's, for example, or Docker Hub).
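A minimal sketch of that approach, assuming a build/Dockerfile and the placeholder image name from above (neither comes from the original question). The Dockerfile:
FROM ubuntu:latest
COPY sample.txt /sample.txt
and a shell-executor job script along the lines of:
- docker build -t your/image:latest -f build/Dockerfile .
- docker push your/image:latest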
If your goal really is more complex and you just want to add a file to a running container, you can use the same Runner (with a shell executor, not a docker one, so no image) and run something like
- docker run --name YOUR_CONTAINER_NAME -v $PWD:/mnt ubuntu:latest cp /mnt/sample.txt /sample.txt
- docker commit -m "Commit Message" -a "You" YOUR_CONTAINER_NAME your/image:latest
- docker push your/image:latest
- docker rm YOUR_CONTAINER_NAME
Note: I'm not 100% sure the first one would work, but that would be the general idea of creating an image from a container without relying on the actual Dockerfile, if you really can't achieve your goal with a Dockerfile.

How to change source code without rebuilding image in Docker?

What is the best practice for using Docker containers in dev and prod?
Let's say I want my changes to be applied automatically during development, without rebuilding and restarting images. As far as I understand, I can inject a volume for this when running the container.
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
Here `pwd`/src is the directory containing the source code. It's working fine so far.
But how do I deliver the code to production? Is it worth keeping the code along with the binaries inside the Docker container? Do I need to create another, similar Dockerfile which uses COPY instead? Or is it better to deploy the source code separately, as in dev mode, and mount it into the container?
The best practice is to build a new docker image for every version of your code. That has many advantages in production environments, such as faster deployments, independence from other systems, easier rollbacks, exportability, etc.
It is possible to do it within the same Dockerfile, using multi-stage builds.
The following is a simple example for a NodeJS app:
# dev stage: code and dependencies are expected to be mounted in at run time
FROM node:10 as dev
WORKDIR /src
CMD ["node", "myapp.js"]

# production stage: dependencies and code are baked into the image
FROM node:10
WORKDIR /src
COPY package.json .
RUN npm install
COPY . .
CMD ["node", "myapp.js"]
Note that this Dockerfile is only for demo purposes, it can be improved in many ways.
When working in the dev environment, use the following commands to build the dev stage and run your code with a mounted folder:
docker build --target dev -t username/node-web-app0 .
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
And when you're ready for production, just run docker build without the --target argument to build the full image, which contains the code:
docker build -t username/node-web-app0:v0.1 .
docker push username/node-web-app0:v0.1
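To make the difference concrete, the two variants would then be run roughly like this (names follow the example above):
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
docker run --rm username/node-web-app0:v0.1
The dev container reads the code from the mounted folder, while the production image already contains it and needs nothing mounted.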

Docker build using volumes at build time

Is there a way to use external volumes during a docker image build?
I have a situation where I would like to use a configuration that lives on an external volume at docker image build time. Is that possible?
(edited to reflect current Docker CLI behavior)
If by 'docker image build' you mean running a single docker build ... command: no, there is no way to do that (at least, not in the most recent documentation I have read). However, nothing prevents you from performing the step that needs the external volume with direct docker commands, then committing the container and tagging it just as docker build would. Assuming this is the last step in your build, put all the other commands (that don't need the volume) into a Dockerfile and then do this:
tmp_img=`docker build -q .`
tmp_container=`docker run -d -v $my_ext_volume:$my_mount_path --entrypoint="(your volume-dependent build command here)" $tmp_img`
docker wait "$tmp_container"
docker commit "$tmp_container" my_repo/image_tag:latest
docker rm "$tmp_container"
This does the same as having a RUN command in the Dockerfile, but with the added volume mount. The commit command in the example also tags the image.
It is a bit more complex if you need to have other Dockerfile commands after the volume-dependent one, but in most cases you can combine run commands and re-arrange your install in a way that leaves the manual run-with-volume command last, to keep things simple.
You can copy the file into the docker image (with ADD) and rm it as one of the last steps.
podman is an alternative to Docker whose CLI is the same, but which also supports mounting volumes at build time.
I use this to load data into test databases without having to copy the data into the image first.
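For example, a sketch of a podman build with a host directory mounted read-only during the build (the paths are assumptions):
podman build --volume /path/to/testdata:/testdata:ro -t myimage .
The mounted directory is visible to RUN instructions during the build but is not persisted into the resulting image.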
You can use ADD combined with ARG (build time parameters) to access files or directories during the build without having to hardcode their location.
ARG MAVEN_SETTINGS=settings.xml
ADD $MAVEN_SETTINGS ./
And now you can change the file location during the build with:
docker build --build-arg MAVEN_SETTINGS=someotherfile.xml .
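If the flag is omitted, the default declared by ARG is used, so a plain build still picks up settings.xml:
docker build .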
We are not restricted to Docker to build OCI images.
With buildah it's possible to mount volumes from the host that won't be persisted in the final image. Useful for configuration and secrets.
buildah bud --volume /home/test:/myvol:ro -t imageName .

Should files from a base Docker image be present in a derived image?

I'm creating a Dockerfile that uses a base image: dockerfile/rabbitmq.
In the Dockerfile for rabbitmq there's a line to install a script into the image:
ADD bin/rabbitmq-start /usr/local/bin/
In my Dockerfile I don't have this line. I have my own ADD lines.
When I run the image all the rabbitmq binaries and config are there, along with my stuff, but there's no rabbitmq-start script anywhere.
Why isn't it present in my image? (If I run the base image dockerfile/rabbitmq, the file is there, of course.) Are ADDs not "inherited" by derived images?
Seems to work for me:
- I cloned dockerfile/ubuntu and built that locally.
- I cloned dockerfile/rabbitmq and built that locally.
- I cloned your repository and built that locally.
Booting a shell in your image:
docker run -it --rm gzoller/world bash
I see both the rabbitmq-start script added by the rabbitmq image as well as the start script installed in your image:
[ root@d0044b91278e:/data ]$ ls /usr/local/bin/
rabbitmq-start start
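The reason the script shows up is that a derived image reuses every layer of its base image, which you can verify by comparing layer digests (not part of the original answer, just a quick check):
docker image inspect --format '{{json .RootFS.Layers}}' dockerfile/rabbitmq
docker image inspect --format '{{json .RootFS.Layers}}' gzoller/world
The second list starts with exactly the layers of the first, so anything ADDed by the base image, including rabbitmq-start, is present in the derived image.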
