Docker build command is throwing error in jenkins - docker

When I run the Jenkins pipeline, the docker build -t command written in the Jenkinsfile gives me the error below.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /var/lib/snapd/void/Dockerfile: no such file or directory

docker build [OPTIONS] PATH | URL | -
In your case, OPTIONS is -t <tag> (did you want to add a tag?)
PATH is the folder with the context that will be passed to the build process; it must exist.
Commonly you change into the directory containing your context and run something like:
docker build ./
This means Docker takes the current directory and passes it as the context,
and a Dockerfile must exist in that folder.
But you can also pass -f /path/to/Dockerfile in [OPTIONS].
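As a quick sketch of both forms (the image tag and paths here are placeholders, not taken from the question):

```shell
# Build using the current directory as the context;
# Docker expects ./Dockerfile to exist
docker build -t myapp:latest .

# Build with a Dockerfile stored somewhere else;
# the last argument is still the context directory
docker build -t myapp:latest -f /path/to/Dockerfile /path/to/context
```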

For Dockerfiles
Some information on the purpose of a Dockerfile:
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.
More information with the arguments and how to use them in the Docker documentation,
https://docs.docker.com/engine/reference/builder/
For Docker build
Description
Build an image from a Dockerfile
Usage
docker build [OPTIONS] PATH | URL | -
Extended description
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.
More information from the Docker documentation,
https://docs.docker.com/engine/reference/commandline/build/
If docker build succeeds when run by hand, but Jenkins still reports the same error as before, then you need to check the file path /var/lib/snapd/void/Dockerfile on the Jenkins server running the job. In addition, check the location the Jenkins build error reports, /var/lib/jenkins/workspace/docExp, for broken symlinks, and verify its permissions, so that you no longer receive the error.
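A few diagnostics to run on the Jenkins host; the paths come from the error messages above:

```shell
# Check whether the path Docker complained about actually exists,
# and inspect the workspace's owner and permissions
DOCKERFILE=/var/lib/snapd/void/Dockerfile
WORKSPACE=/var/lib/jenkins/workspace/docExp
for p in "$DOCKERFILE" "$WORKSPACE"; do
    if [ -e "$p" ]; then
        ls -ld "$p"           # exists: show owner and permissions
    else
        echo "missing: $p"    # the lstat error means Docker could not find this
    fi
done
```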

Related

I can't call my Dockerfiles something else

My solution contains a shared project, a web frontend, and a web API project. Since a Dockerfile can't access parent folders, I figure the Dockerfiles need to be in the root folder (frontend and API both depend on shared).
So I have Dockerfile.api, which is allowed if I have version 1.8 or greater (https://github.com/moby/moby/wiki). I do; I have the latest 4.2.
Building with the file named Dockerfile (the web API) is successful, but calling docker build -f Dockerfile.api results in this error:
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
What am I missing here? As my repo contains different projects that will result in different images I want different Dockerfiles, and they need to be at the root due to dependencies.
docker build needs a path argument: the directory in which the build should be executed.
So the correct command is:
docker build -f Dockerfile.api .
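For example, with both Dockerfiles at the repository root (the image names below are hypothetical), each project gets its own build against the same context:

```shell
# The context is the repo root (.) in both cases; only the Dockerfile differs
docker build -f Dockerfile.api -t myrepo/api:latest .
docker build -f Dockerfile.frontend -t myrepo/frontend:latest .
```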

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy some file from the host filesystem rather than from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ path?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
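A minimal sketch of such a preparation script (the file names and image tag are assumptions, reusing the paths from the question):

```shell
#!/bin/sh
# prepare-and-build.sh: copy the external file into the build context,
# build the image, then clean up so the copy is not left behind
set -e
cp "$HOME/.super/secrets.yaml" ./secrets.yaml
docker build -t myapp:latest .
rm ./secrets.yaml
```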
No, it is not possible to go up the directory tree like that. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Well, here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you can see, the dot references the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only those files in your Dockerfile. Of course, you might think you could be clever and pass / (the root path) as the context so you have access to every file on your machine. (I highly encourage you to try this and see what happens.) What you should see is the Docker client apparently freezing. Or does it really? It's not actually freezing: it is sending the entire / directory to the Docker daemon, which can take ages, or (more probably) you may run out of memory.
Now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file under that directory would get packaged up and sent to the Docker daemon, which would be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that this will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need at runtime, it's better to pass it in via an environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
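One such alternative, if your Docker version supports BuildKit (18.09 or newer), is a build secret: the file is made available during a RUN step but is not stored in any image layer. A sketch, reusing the secrets.yaml path from the question (the secret id is arbitrary):

```shell
# In the Dockerfile, mount the secret for a single RUN step:
#   RUN --mount=type=secret,id=app_secrets cat /run/secrets/app_secrets
# Then build with BuildKit enabled, passing the host file as the secret source:
DOCKER_BUILDKIT=1 docker build --secret id=app_secrets,src="$HOME/.super/secrets.yaml" .
```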

Docker build with custom path and custom filename not working

I am using Docker version 19.03.2 in Windows 10.
I have a directory C:\docker_test with files Dockerfile and Dockerfile_test. When I execute
PS C:\> docker build 'C:\docker_test'
in PowerShell, the image is built from Dockerfile, just as expected. However, when I want to build an image from Dockerfile_test like this
PS C:\> docker build -f Dockerfile_test 'C:\docker_test'
I get
unable to prepare context: unable to evaluate symlinks in Dockerfile path:
CreateFile C:\Dockerfile_test: The system cannot find the file specified.
I don't understand why Docker is looking for C:\Dockerfile_test although I specified a build path.
You should state the path (the context) of your Dockerfile, like so
PS C:\> docker build -f 'C:\docker_test\Dockerfile_test' 'C:\docker_test'
There is already an answer to the question, but this is to add a bit of detail.
From the docs
The docker build command builds an image from a Dockerfile and a
context. The build’s context is the set of files at a specified
location PATH or URL
With C:\docker_test you specified the context.
From the same docs
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context. You use the -f flag with docker build to point to a Dockerfile anywhere in your file system.
Therefore, if you specify the -f flag, Docker will look for the given file. In your case you gave a file name without a path, so Docker searched the current directory (here C:\).
To make it work, use the command as suggested by @Arik.

Why to use -f in docker build command

I follow this K8s tutorial, and in the middle of the file, there is the following instruction:
12. Now let’s build an image, giving it a special name that points to our local cluster registry.
$ docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
I don't understand why you need to point to the Dockerfile using -f applications/hello-kenzan/Dockerfile.
In the man of docker build:
-f, --file=PATH/Dockerfile
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is Dockerfile.
So -f is there to point to the Dockerfile, but we already gave the path of the Dockerfile at the end of the build command (docker build ...applications/hello-kenzan), so why do you need to write it twice? Am I missing something?
The reason for this is probably that the author had multiple files called Dockerfile, and using -f tells Docker NOT to use the Dockerfile in the current directory (when there is one) but to use the Dockerfile in applications/hello-kenzan instead.
While in THIS PARTICULAR example it was unnecessary, I appreciate that the tutorial creator shows the option of combining a PATH with -f pointing at a specific place, to show you (the person who wants to learn) that it is possible to use Dockerfiles that are not in PATH (i.e. when you have multiple Dockerfiles you want to create your builds with, or when the file is not named Dockerfile but e.g. myapp-dockerfile).
You are right. In this case you don't have to use the -f option. The official docs say:
-f: Name of the Dockerfile (default is 'PATH/Dockerfile'), and as the given PATH is applications/hello-kenzan, the Dockerfile there will be found implicitly.

How to skip sending of build context without using .dockerignore?

Without using a .dockerignore file, is there a way to skip sending of build context when building an image via the following command?
docker build .
In other words, I would like the build context to be empty, without a need to manually create an empty directory that I would then pass to docker build.
You can run
docker build - < Dockerfile
From the official documentation:
This will read a Dockerfile from STDIN without context. Due to the lack of a context, no contents of any local directory will be sent to the Docker daemon.
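For example, a complete build from STDIN via a heredoc (the image name is illustrative). Note that because there is no context, COPY and ADD cannot be used in such a Dockerfile:

```shell
docker build -t hello-stdin - <<'EOF'
FROM alpine:latest
CMD ["echo", "hello from a context-less build"]
EOF
```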
