Copy file into Dockerfile from different directory - docker

Is it possible for a Dockerfile to copy a file from the host filesystem and not from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you could work around it like this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ directory?
docker build .

The usual way to get "external" files into your Docker container is to copy them into your build directory before starting the docker build. It is strongly recommended to create a script for this, to ensure that your preparation step is reproducible.
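For example, the preparation script could be as small as this (a sketch; the image name and file paths are placeholders matching the question's layout):
#!/bin/sh
# build.sh -- copy the external file into the context, build, then clean up
set -e
cp ~/.super/secrets.yaml .
docker build -t my-image .
rm secrets.yaml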

No, it is not possible to go above the context directory. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Well, here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you can see, the dot references the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only these files in your Dockerfile. Of course, you might think you can be clever and pass / (the root path) as the context, so you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see happen is that the Docker client appears to freeze. Or does it really? It's not actually freezing; it's sending the entire / directory to the Docker daemon, which can take ages, or (more likely) you may run out of memory.
So now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.

When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file under that directory would get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that this will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need at runtime, it's better to pass it in via an environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
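For the build-time case, a minimal sketch of the BuildKit secret approach (the id supersecret and the cat step are illustrative placeholders, not from the question):
# syntax=docker/dockerfile:1
FROM gradle:latest
# The secret is mounted only for the duration of this RUN step;
# it is never written into an image layer.
RUN --mount=type=secret,id=supersecret \
    cat /run/secrets/supersecret   # replace with whatever build step needs the secret
Then build with:
DOCKER_BUILDKIT=1 docker build --secret id=supersecret,src=$HOME/.super/secrets.yaml .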

Related

Docker only recognizes relative directories, not absolute

In my Dockerfile, I'm trying to copy a directory from the host into the image. What I noticed is that if I use a relative directory, like this:
COPY ./foo/bar /
This works fine.
This does not:
COPY /foo/bar /
The error I get back when I try to build the image is confusing, too:
lstat /foo/bar: no such file or directory
...because if I just do an ls /foo/bar on the host, it's there.
Is there some syntax I have wrong, or something else?
Another user answered in a comment: read the docker build documentation.
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
EDIT: As The Fool pointed out in a comment regarding context:
Think of the context as a root directory that Docker cannot get out of.
In layman's terms: you can only perform the COPY command with paths relative to the context you set.
Say you have this directory structure:
<Dockerfile path>/
    Dockerfile
    somefile.txt
<your context directory>/
    config/configfile
    data/datafile
And you use this docker build command:
docker build -f <Dockerfile path>/Dockerfile <your context directory>
In your Dockerfile you can specify paths relative to the context directory, but you cannot access anything above it. somefile.txt is not accessible from the Dockerfile, because it lives next to the Dockerfile and outside the context.
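To illustrate with the layout above, a sketch of what that Dockerfile could and could not do (the destination paths are arbitrary examples):
# Inside <Dockerfile path>/Dockerfile, built with the command above
COPY config/configfile /opt/config/   # OK: resolved relative to the context
COPY data/datafile /opt/data/         # OK: also inside the context
# COPY somefile.txt /opt/             # fails: the file sits next to the
#                                     # Dockerfile, outside the context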

How to include files outside of build context without specifying a different Dockerfile path?

This is basically a follow-up question to How to include files outside of Docker's build context?: I'm using large files in all of my projects (several GBs), which I keep on an external drive used only for development.
I want to COPY or ADD these files to my Docker container when building it. The answer linked above allows one to specify a different path to a Dockerfile, potentially extending the build context. I find this impractical, since it would require setting the build context to the system root (?) just to include a single file.
Long story short: Is there any way or workaround to include a file that is far removed from the docker build context?
Three suggestions on things you could try:
include a file that is far removed from the docker build context?
You could construct your own build context by copying (cp) or archiving (tar) files on the host into a dedicated directory tree. You don't have to use the actual source tree or your build tree.
rm -rf docker-build
mkdir docker-build
cp -a Dockerfile build/the-binary docker-build
cp -a /mnt/external/support docker-build
docker build ./docker-build
# reads docker-build/Dockerfile, and the files in the
# docker-build directory, but nothing else; only sends
# the docker-build directory to Docker as the build context
large files [...] (several GBs)
Docker doesn't deal well with build contexts this large. In the past I've seen docker build take a long time just on the step of sending the build context to itself, and docker push and docker pull run into network issues when trying to move gigabyte-plus layers around.
It's a little hacky and breaks the "self-contained image" model a little bit, but you can provide these files as a Docker bind-mount instead of including them in the image. Your application needs to know what to do if the data isn't there. When you go to deploy the application, you also need to separately distribute the files alongside the Docker image and other deployment artifacts.
docker run \
  -v /mnt/external/support:/app/support \
  ...
  the-image-without-the-support-files
only used for development
Potentially you can get away with not using Docker at all during this phase of development. Use a local source tree and local development tools; run your unit tests against these large test fixtures as needed. Build a Docker image only when you're about to run pre-commit integration tests; that may be late enough in the development cycle that you don't need these files.
I think the main thing you are worried about is that you do not want to send all the files in a directory to the Docker daemon while it builds the image. When the directory is very big (several GBs), that adds a lot of time to the image build.
If the requirement is just to use those files while you build something inside Docker, you can mount them into a container instead.
A tricky way
Run a container from the base image and mount the directories inside it: docker run -d -v local-path:container-path
Get inside the container: docker exec -it CONTAINER_ID bash
Run the build step: ./build-something.sh
Create an image from the running container: docker commit CONTAINER_ID
Tag the image: docker tag IMAGE_ID tag:v1 (you can get the image ID from the previous command)
In the long run this method may seem tedious, but if you only want to build an image once or twice, you can try it; a consolidated sketch follows below.
I tried this for one of my Docker images, as I wanted to avoid sending a large number of files to the Docker daemon during the image build.
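Put together, the whole flow might look like this (the image names, mount paths, and build script are placeholders; the base image is assumed to have sleep and bash available):
# 1. Start a long-running container from the base image with the big directory mounted
CONTAINER_ID=$(docker run -d -v /mnt/external/support:/data base-image:latest sleep infinity)
# 2. Run the build step inside it
docker exec -it "$CONTAINER_ID" ./build-something.sh
# 3. Snapshot the container's filesystem as a new image and tag it
IMAGE_ID=$(docker commit "$CONTAINER_ID")
docker tag "$IMAGE_ID" myapp:v1
# 4. Clean up the helper container
docker rm -f "$CONTAINER_ID"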
The COPY command takes source and destination values; just specify the full absolute path to your hard drive mount point as the src directory:
COPY /absolute_path/to/harddrive /container/path
(Note, though, that as explained above, COPY sources are resolved relative to the build context, so this only works if that path actually sits inside the context.)

cp: cannot stat 'Pipfile': No such file or directory in Docker build

I have a project directory setup like this:
docker and Backend are siblings
I go into my docker folder and type docker build ./
Theoretically, based on the relevant part of my Dockerfile, I should go to the parent directory and into Backend
ARG PROJECT_DIR=../Backend
WORKDIR $PROJECT_DIR
RUN cp Pipfile Pipfile.lock
However, I get the error:
cp: cannot stat 'Pipfile': No such file or directory
I have also tried COPY and it does not work either.
Why can't I copy my Pipfile to Pipfile.lock? It exists in the Backend directory.
You need to understand two things:
A docker build actually instantiates a container from the base image (FROM), applies the changes specified in your Dockerfile, and creates a new image from the final state of that container.
A Docker container has an isolated filesystem. You cannot access external files from it unless you configure a volume (and volumes are not available during the Dockerfile/docker build process).
So, to put files inside your Docker image, you use the COPY command. Its first argument is the source file path, resolved relative to the build context (by default, the directory of your Dockerfile). Files outside it are inaccessible for security reasons.
If you need to access a file outside that directory, you probably placed it in the wrong place.
Docker is a client (docker) <> server (dockerd, running as a daemon) application. When you type docker build ..., the client compresses the context (specified in your example as ./) and sends it to the server (which in most cases is running on the same machine, but that's really irrelevant here). That means anything that's not in the context is unavailable to the build process (which is done by the server).
So now you can see why things in a 'sibling' directory are not available: they aren't part of the context, and aren't sent to the daemon for the build.
More info here: https://docs.docker.com/engine/reference/commandline/build/#extended-description
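One way to fix the question above, then, is to run the build from the common parent directory and point -f at the Dockerfile (a sketch; it assumes the docker/ and Backend/ sibling layout described, and the /app destination is a placeholder):
# Run from the directory that contains both docker/ and Backend/
docker build -f docker/Dockerfile .
# Inside docker/Dockerfile, source paths are now relative to that parent directory:
#   COPY Backend/Pipfile Backend/Pipfile.lock /app/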

Why use -f in the docker build command

I'm following this K8s tutorial, and in the middle of the file there is the following instruction:
12. Now let’s build an image, giving it a special name that points to our local cluster registry.
$ docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
I don't understand why you need to point to the Dockerfile using -f applications/hello-kenzan/Dockerfile.
In the man of docker build:
-f, --file=PATH/Dockerfile
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is Dockerfile.
So -f points to the Dockerfile, but we already gave the path of the Dockerfile at the end of the build command (docker build ... applications/hello-kenzan), so why do you need to write it twice? Am I missing something?
The reason is probably that the tutorial author had multiple files called Dockerfile, and using -f tells Docker NOT to use the Dockerfile in the current directory (when there is one) but the Dockerfile in applications/hello-kenzan instead.
While in THIS PARTICULAR example it was unnecessary, I appreciate that the tutorial creator shows you the option of pointing -f at a specific place, so that you (the person who wants to learn) see it is possible to use Dockerfiles that are not in the PATH (i.e. when you have multiple Dockerfiles to build from, or when the file is not named Dockerfile but e.g. myapp-dockerfile).
You are right; in this case you don't have to use the -f option. The official docs say:
-f: Name of the Dockerfile (default is 'PATH/Dockerfile')
Since the given PATH is applications/hello-kenzan, the Dockerfile there will be found implicitly.
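In other words, for this tutorial the two invocations below behave identically, because the Dockerfile has its default name and sits at the root of the given context:
docker build -t 127.0.0.1:30400/hello-kenzan:latest applications/hello-kenzan
docker build -t 127.0.0.1:30400/hello-kenzan:latest \
    -f applications/hello-kenzan/Dockerfile applications/hello-kenzan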

How to delete files sent to Docker daemon build context

I ran this command in my home directory:
docker build .
and it sent 20 GB of files to the Docker daemon before I knew what was happening. I have no space left on my laptop. How do I delete the files that were replicated? I can't locate them.
What happens when you run the docker build . command:
The Docker client looks for a file named Dockerfile in the directory where your command runs. If that file doesn't exist, an error is thrown.
The Docker client looks for a file named .dockerignore. If that file exists, the client uses it in the next step. If it doesn't exist, nothing happens.
The Docker client makes a tar package called the build context. By default, it includes everything in the same directory as the Dockerfile. If there are ignore rules in the .dockerignore file, the client excludes the files those rules specify.
The Docker client sends the build context to the Docker engine, also known as the Docker daemon or Docker server.
The Docker engine receives the build context on the fly and starts building the image, step by step as defined in the Dockerfile.
After the image build is done, the build context is released.
So, your build context is not replicated anywhere except in the image you just created, and only to the extent that the image actually needs it. You can check image sizes by running docker images. If you see some unused or unnecessary images, remove them with docker rmi unusedImageName.
If your image doesn't need everything in the build context, I suggest using .dockerignore rules to reduce the build context size. Exclude everything that is not necessary for the image. This way the build process will be shorter, and you will also spot any misconfigured COPY or ADD steps in the Dockerfile.
For example, I use something like this:
# .dockerignore
# exclude everything
*
# include just what I need in the image
!build/libs/*.jar
https://docs.docker.com/engine/reference/builder/#dockerignore-file
https://docs.docker.com/engine/docker-overview/
Most likely the space is being used by the resulting image. Locate it:
docker images
and look at the SIZE column.
Then delete it:
docker rmi <image-id>
Also you can delete everything docker-related:
docker system prune -a
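On installations that use BuildKit, leftover build cache can also take up space; it can be cleared separately:
docker builder prune        # remove dangling build cache
docker builder prune -a     # remove all build cache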
If the build was interrupted for some reason, you can also go to /var/lib/docker/tmp/ (with root access) and delete the Docker builder's temporary files there. When the build context is not sent completely, the part that was transferred is kept as a tmp file in /var/lib/docker/tmp/.
