Docker only recognizes relative directories, not absolute

In my Dockerfile, I'm trying to copy a directory from the host into the image. What I noticed is that if I use a relative directory, like this:
COPY ./foo/bar /
This works fine.
This does not:
COPY /foo/bar /
The error I get back when I try to build the image is confusing, too:
lstat /foo/bar: no such file or directory
...because if I just do an ls /foo/bar on the host, it's there.
Is there some syntax I have wrong, or something else?

Another user answered this in a comment: read the docker build documentation.
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
EDIT: As The Fool pointed out in his comment regarding context:
Think of the context as a root directory that Docker cannot get out of.
In layman's terms: you can only perform the COPY command with paths relative to the context you set.
Say you have this directory structure:
<Dockerfile path>/
    Dockerfile
    somefile.txt
<your context directory>/
    config/configfile
    data/datafile
And you use this docker build command:
docker build -f <Dockerfile path>/Dockerfile /<your context directory>
In your Dockerfile you can specify paths relative to the context directory, but you cannot access anything above it. somefile.txt is not accessible from the Dockerfile.
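A minimal sketch of how the COPY paths resolve inside that Dockerfile, using the hypothetical directory names above (the destination paths are just examples):
COPY config/configfile /etc/myapp/        # works: config/ is inside the context
COPY data/datafile /var/lib/myapp/        # works: data/ is inside the context
# COPY ../somefile.txt /tmp/              # fails: somefile.txt sits above the context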

Related

Copy folders from one directory to another within Docker container

Can anyone tell me how to move a folder from one Directory to another within a Docker container? I have a folder in the root directory of my Docker container:
root/folder/folder1
I've created a folder called Source at the same level as root inside the container. In my Dockerfile I'm trying to copy folder1 into Source as below:
ADD root/folder/folder1/ /Source/
But I get an error saying that root/folder/folder1/ isn't a file or directory. I'm new to Docker, can anyone assist?
The ADD instruction copies from outside the image into the image. The source should be in your context directory or, for ADD, a URL that Docker will download (and extract if it's an archive).
It's usually a good practice to use COPY instead of ADD most of the time.
In your case, as you want to copy a directory that is already inside your docker image, you should execute a shell command for that: RUN cp -r /root/folder/folder1 /Source (or maybe create a link if you don't need to duplicate the content).
For more information about ADD vs COPY, you can read the Dockerfile Best Practices from Docker.
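A minimal Dockerfile sketch of that approach, using the paths from the question (the base image is just a placeholder):
FROM ubuntu:22.04                          # placeholder base image
# folder1 already exists inside the image, so a shell copy works:
RUN cp -r /root/folder/folder1 /Source
# Or, to avoid duplicating the content, create a symlink instead:
# RUN ln -s /root/folder/folder1 /Source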

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy over some file from the host filesystem and not from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ path?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
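A minimal sketch of such a preparation script (the file names are just the ones from the question):
#!/bin/sh
set -e
trap 'rm -f ./secrets.yaml' EXIT                 # clean up even if the build fails
cp ~/.super/secrets.yaml ./secrets.yaml          # stage the external file into the build context
docker build .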
No, it is not possible to go above the context directory. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Well, here is part of the docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you see, this dot references the context path (here it means "this directory"). All files under the context path get sent to the docker daemon, and you can reference only these files in your Dockerfile. Of course, you might think you can be clever and reference / (the root path) so you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see is that the docker client appears to freeze. Or does it really? Well, it's not really freezing; it's sending the entire / directory to the docker daemon, and that can take ages or (what's more probable) make you run out of memory.
So now that you understand this limitation, you see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../ but then every single file in that root directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that this will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need at runtime, it's better to pass it in via an environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
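One of those alternatives, sketched here on the assumption that your Docker version supports BuildKit, is a build-time secret mount (the id app_secrets is just an example name):
# syntax=docker/dockerfile:1
FROM gradle:latest
# The file is mounted at /run/secrets/app_secrets only for this RUN step and is not stored in any image layer
# (here it is just read as a demonstration; a real build would use it, e.g. as credentials):
RUN --mount=type=secret,id=app_secrets cat /run/secrets/app_secrets > /dev/null
Built with:
DOCKER_BUILDKIT=1 docker build --secret id=app_secrets,src=$HOME/.super/secrets.yaml .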

cp: cannot stat 'Pipfile': No such file or directory in Docker build

I have a project directory setup like this:
docker and Backend are siblings
I go into my docker folder and type docker build ./
Theoretically, based on the relevant part of my Dockerfile, I should go to the parent directory and into Backend
ARG PROJECT_DIR=../Backend
WORKDIR $PROJECT_DIR
RUN cp Pipfile Pipfile.lock
However I get the error:
cp: cannot stat 'Pipfile': No such file or directory
I have also tried COPY and it does not work either.
Why can't I copy my Pipfile to Pipfile.lock? It exists in the Backend directory.
You need to understand two things:
A docker build actually instantiates a container using the base image (FROM), applies the changes specified in your Dockerfile, and creates a new image from the final state of this container.
A docker container has an isolated filesystem. You cannot access external files from it unless you configure a volume (and volumes are not available in the Dockerfile/docker build process).
So, to put files inside your docker image, you use the COPY instruction. The first argument is the source path, relative to the build context. Files outside the context are inaccessible for security reasons.
If you need to access a file outside the build context, you probably placed it in the wrong place.
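A minimal sketch of how that rearrangement could look for the layout in the question, assuming you run the build from the parent directory that contains both docker/ and Backend/ (the /app destination is just an example):
# From the parent directory:
#   docker build -f docker/Dockerfile .
# Inside docker/Dockerfile, copy relative to that context:
COPY Backend/Pipfile Backend/Pipfile.lock /app/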
Docker is a client (docker) <> server (dockerd, running as a daemon) application. When you type docker build... the client compresses the context (specified in your example as ./) and sends that to the server (which in most cases is running on the same machine, but that's really irrelevant here). That means anything that's not in the context is unavailable to the build process (which is done by the server).
So now you can see why things in a 'sibling' directory are not available, they aren't part of the context, and aren't sent to the daemon for the build.
More info here: https://docs.docker.com/engine/reference/commandline/build/#extended-description

Docker build with custom path and custom filename not working

I am using Docker version 19.03.2 in Windows 10.
I have a directory C:\docker_test with files Dockerfile and Dockerfile_test. When I execute
PS C:\> docker build 'C:\docker_test'
in PowerShell, the image is built from Dockerfile, just as expected. However, when I want to build an image from Dockerfile_test like this
PS C:\> docker build -f Dockerfile_test 'C:\docker_test'
I get
unable to prepare context: unable to evaluate symlinks in Dockerfile path:
CreateFile C:\Dockerfile_test: The system cannot find the file specified.
I don't understand why Docker is looking for C:\Dockerfile_test although I specified a build path.
You should specify the full path to your Dockerfile, like so:
PS C:\> docker build -f 'C:\docker_test\Dockerfile_test' 'C:\docker_test'
There is already an answer to the question, but this is to add a bit more detail.
From the docs
The docker build command builds an image from a Dockerfile and a
context. The build’s context is the set of files at a specified
location PATH or URL
With C:\docker_test you specified the context.
From the same docs
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context. You use the -f flag with docker build to point to a Dockerfile anywhere in your file system.
Therefore, if you specify the -f flag, docker will search for the given file. In your case you passed a file name without a path, so docker searches in the current directory.
To make it work, use the command suggested by @Arik.
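For completeness, two equivalent ways to run it from PowerShell (a sketch using the paths from the question):
PS C:\> docker build -f 'C:\docker_test\Dockerfile_test' 'C:\docker_test'
PS C:\> cd C:\docker_test; docker build -f Dockerfile_test .
# In the second form the relative -f path resolves against the current directory, which is now also the context.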

Why to use -f in docker build command

I'm following this K8s tutorial, and in the middle of the file, there is the following instruction:
12. Now let’s build an image, giving it a special name that points to our local cluster registry.
$ docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
I don't understand why do you need to point to the dockerfile using -f applications/hello-kenzan/Dockerfile.
In the man of docker build:
-f, --file=PATH/Dockerfile
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is Dockerfile.
So -f is to point to the Dockerfile, but we already gave the path of the Dockerfile at the end of the build command (docker build ...applications/hello-kenzan), so why do you need to write it twice? Am I missing something?
The reason for this is probably that he had multiple files called Dockerfile, and using -f tells docker NOT to use the Dockerfile in the current directory (when there is one) but to use the Dockerfile in applications/hello-kenzan instead.
While in THIS PARTICULAR example it was unnecessary to do this, I appreciate that the tutorial creator shows you the option to use PATH and point -f at a specific place, to show you (the person who wants to learn) that it is possible to use Dockerfiles that are not in PATH (i.e. when you have multiple Dockerfiles you want to create your builds with, or when the file is not named Dockerfile but e.g. myapp-dockerfile).
You are right. In this case you don't have to use the -f option. The official docs say:
-f: Name of the Dockerfile (Default is ‘PATH/Dockerfile’)
As the given PATH is applications/hello-kenzan, the Dockerfile there will be found implicitly.
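In other words, for this tutorial these two commands should produce the same image (a sketch based on the command from the question):
docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
docker build -t 127.0.0.1:30400/hello-kenzan:latest applications/hello-kenzan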
