Docker build with custom path and custom filename not working - docker

I am using Docker version 19.03.2 in Windows 10.
I have a directory C:\docker_test with files Dockerfile and Dockerfile_test. When I execute
PS C:\> docker build 'C:\docker_test'
in PowerShell, the image is built from Dockerfile, just as expected. However, when I want to build an image from Dockerfile_test like this
PS C:\> docker build -f Dockerfile_test 'C:\docker_test'
I get
unable to prepare context: unable to evaluate symlinks in Dockerfile path:
CreateFile C:\Dockerfile_test: The system cannot find the file specified.
I don't understand why Docker is looking for C:\Dockerfile_test although I specified a build path.

You should state the path (the context) of your Dockerfile, like so
PS C:\> docker build -f 'C:\docker_test\Dockerfile_test' 'C:\docker_test'

There is already an answer to the question, but here is a bit more detail.
From the docs:
The docker build command builds an image from a Dockerfile and a
context. The build’s context is the set of files at a specified
location PATH or URL
With C:\docker_test you specified the context.
From the same docs
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context. You use the -f flag with docker build to point to a Dockerfile anywhere in your file system.
Therefore, if you specify the -f flag, docker will search for the given file. In your case you passed a file name without a path, so docker searches in the current directory.
To make it work, use the command as suggested by @Arik.
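As a hedged illustration of that resolution rule (assuming you are free to change directories), both of these work, because -f is resolved from the current directory, not from the build context:

```shell
# Option 1: give -f the full path to the Dockerfile:
docker build -f C:\docker_test\Dockerfile_test C:\docker_test

# Option 2: change into the context directory first, then a relative -f works too:
cd C:\docker_test
docker build -f Dockerfile_test .
```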

Related

specify the docker file name with `-f Dockerfile` yields error when building image [duplicate]

This question already has answers here: "docker build" requires exactly 1 argument(s) (4 answers). Closed 1 year ago.
I have a Dockerfile (there is no problem with the Dockerfile itself; it is provided by a 3rd party).
The directory structure:
webapp/
- Dockerfile
- app.py
I can build the image with docker build . It successfully built an image with no image name or tag:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> ee49229fe570 44 seconds ago 913MB
I would like to build the image with a name (repository) "webapp-color" and no tag. So, I re-build the image with docker build -f Dockerfile -t webapp-color, but this command gives an error:
$ docker build -f Dockerfile -t webapp-color
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Then I changed the command a little bit, executing docker build . -t webapp-color, and now it succeeded.
Why can't I specify -f Dockerfile? I understand it is the default file name docker looks for, but I don't understand why I can't explicitly specify it.
I checked my docker version:
Docker version 19.03.15
The error was not about having a -f Dockerfile, but rather about not having that PATH (.) to the context at the end. See here for more details.
This example specifies that the PATH is ., and so all the files in the local directory get tarred and sent to the Docker daemon. The PATH specifies where to find the files for the “context” of the build on the Docker daemon. Remember that the daemon could be running on a remote machine and that no parsing of the Dockerfile happens at the client side (where you’re running docker build). That means that all the files at PATH get sent, not just the ones listed to ADD in the Dockerfile.
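With the missing PATH argument added, the original intent works; a minimal sketch (webapp-color is the name from the question, . is the current directory):

```shell
# Same as the failing command, plus the required PATH argument:
docker build -f Dockerfile -t webapp-color .
```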

Docker build command is throwing error in jenkins

When I run the Jenkins pipeline, the "docker build -t " command written in the Jenkinsfile gives me the error below.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /var/lib/snapd/void/Dockerfile: no such file or directory
docker build [OPTIONS] PATH | URL | -
in your case OPTIONS is -t <tag> (did you want to add a tag?)
PATH is the folder with the context passed to the build process; it must exist
commonly you change into the directory containing your context and run something like:
docker build ./
this means docker takes the current directory and passes it as the context,
and a Dockerfile must exist in that folder,
but you can also pass -f /path/to/Dockerfile in [OPTIONS]
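Putting the pieces together, a minimal sketch of a build step to run from the checked-out Jenkins workspace (the image name myimage is a placeholder, not from the question):

```shell
# Run from the workspace so the context (.) exists and contains the Dockerfile:
docker build -t myimage:latest -f ./Dockerfile .
```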
For Dockerfiles
Some information on the purpose of a Dockerfile:
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.
More information with the arguments and how to use them in the Docker documentation,
https://docs.docker.com/engine/reference/builder/
For Docker build
Description
Build an image from a Dockerfile
Usage
docker build [OPTIONS] PATH | URL | -
Extended description
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.
More information from the Docker documentation,
https://docs.docker.com/engine/reference/commandline/build/
If docker build succeeds when run by hand, but Jenkins still reports the same error as before, then
you need to check the filepath /var/lib/snapd/void/Dockerfile on the Jenkins server running the job. In addition, check the location the Jenkins build error reports, /var/lib/jenkins/workspace/docExp, for broken symlinks and for permissions, so that you do not receive any errors.
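A sketch of how those checks might look on the Jenkins node (both paths are taken from the error messages above):

```shell
# Does the Dockerfile the error mentions actually exist, and is it readable?
ls -l /var/lib/snapd/void/Dockerfile
# Does the Jenkins workspace exist, and does the jenkins user have access?
ls -ld /var/lib/jenkins/workspace/docExp
```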

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy a file from the host filesystem rather than from the context it is being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I thought you could work around it like this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ path?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
No, it is not possible to go up out of the build context. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Here is part of the docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you see, this dot references the context path (here it means "this directory"). All files under the context path get sent to the docker daemon, and you can reference only those files in your Dockerfile. Of course, you might think you are clever and reference / (the root path) so you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see is that the docker client appears to freeze. Or does it really? It is not really freezing; it is sending the entire / directory to the docker daemon, which can take ages, or (more probably) you may run out of memory.
So now that you understand this limitation, you see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
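Related to context size: a .dockerignore file at the root of the context keeps files out of what gets tarred and sent to the daemon. A minimal config fragment (the entries are illustrative):

```shell
# .dockerignore
.git
node_modules
*.log
```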
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file under that directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need for runtime, better to pass it in via environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
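If the secret is only needed while building, newer Docker versions (BuildKit) can mount it at build time without storing it in any image layer. A hedged sketch, assuming a secret.yaml file next to the Dockerfile:

```shell
# Dockerfile side, shown as comments for context: the secret is mounted
# only for this RUN step and never written into a layer:
#   # syntax=docker/dockerfile:1
#   RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=secret.yaml .
```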

Why to use -f in docker build command

I follow this K8s tutorial, and in the middle of the file, there is the following instruction:
12. Now let’s build an image, giving it a special name that points to our local cluster registry.
$docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
I don't understand why you need to point to the Dockerfile using -f applications/hello-kenzan/Dockerfile.
In the man of docker build:
-f, --file=PATH/Dockerfile
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is Dockerfile.
So -f points to the Dockerfile, but we already gave the path of the Dockerfile at the end of the build command - docker build ...applications/hello-kenzan - so why do you need to write it twice? Am I missing something?
The reason for this is that the tutorial author probably had multiple files called Dockerfile, and using -f tells docker NOT to use the Dockerfile in the current directory (when there is one) but to use the Dockerfile in applications/hello-kenzan instead.
While in THIS PARTICULAR example it was unnecessary, I appreciate that the tutorial creator shows you the option to use PATH and point -f at a specific place, to show you (the person who wants to learn) that it is possible to use Dockerfiles that are not in PATH (i.e. when you have multiple Dockerfiles you want to build with, or when the file is not named Dockerfile but e.g. myapp-dockerfile).
You are right, in this case you don't have to use the -f option. The official docs say:
-f: Name of the Dockerfile (Default is ‘PATH/Dockerfile’)
and as the given PATH is applications/hello-kenzan, the Dockerfile there will be found implicitly.
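For contrast, a sketch of when -f is and is not needed (docker/Dockerfile.prod and myimage are made-up names, not from the tutorial):

```shell
# Redundant here: the Dockerfile already sits at the root of the context.
docker build -t 127.0.0.1:30400/hello-kenzan:latest applications/hello-kenzan

# Needed here: the file is not named Dockerfile, or not at the context root.
docker build -t myimage -f docker/Dockerfile.prod .
```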

How to execute Linux command in host-machine and copy the output to image build by docker file?

I want to copy the my.cnf file present on the host server into a child docker image, wherever I run a Dockerfile that uses a custom base image containing the command below.
ONBUILD ADD locate -i my.cnf|grep -ioh 'my.cnf'|head -1 /
but the line above breaks the Dockerfile. Please share the correct syntax, or alternatives to achieve the same.
Unfortunately, you can't run host commands inside your Dockerfile. See Execute command on host during docker build.
Your best bet is probably to tweak your deployment scripts to locate my.cnf on the host and copy it into the build context before running docker build.
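A hedged sketch of such a deployment step, reusing the locate pipeline from the question (the image tag is a placeholder, and it assumes a plain "COPY my.cnf /" in the Dockerfile):

```shell
#!/bin/sh
set -e
# Find my.cnf on the host and copy it into the build context,
# so an ordinary COPY instruction can pick it up:
CNF=$(locate -i my.cnf | head -1)
cp "$CNF" ./my.cnf
docker build -t my-child-image .
rm ./my.cnf
```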
