I can't call my Dockerfiles something else - docker

My code solution contains a shared project, a web frontend, and a web API project. Since a Dockerfile can't access parent folders, I figure the Dockerfiles need to be at the root folder (frontend and api both depend on shared).
So I have Dockerfile.api, which is allowed if I have version 1.8 or greater (https://github.com/moby/moby/wiki). I do; I have the latest 4.2.
Calling it Dockerfile (web api) builds successfully, but running docker build -f Dockerfile.api results in this error:
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
What am I missing here? As my repo contains different projects that will result in different images, I want different Dockerfiles, and they need to be at the root due to the shared dependency.

docker build needs a PATH argument: the build context in which the build should be executed.
So the correct command is:
docker build -f Dockerfile.api .
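With the context set to the repository root, each project can then be built from the same root by pointing -f at its Dockerfile (the Dockerfile.frontend name and the image tags below are just illustrative placeholders):
docker build -f Dockerfile.api -t mycompany/api .
docker build -f Dockerfile.frontend -t mycompany/frontend .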

Related

Docker build command is throwing an error in Jenkins

When I run the Jenkins pipeline, the "docker build -t " command written in the Jenkinsfile gives me the error below:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /var/lib/snapd/void/Dockerfile: no such file or directory
docker build [OPTIONS] PATH | URL | -
In your case OPTIONS is -t <tag> (did you want to add a tag?).
PATH is the folder containing the context that will be passed to the build process; it must exist.
Commonly you change into the folder containing your context and run something like:
docker build ./
This means Docker takes the current directory and passes it as the context,
and a Dockerfile must exist in that folder.
But you can also pass -f /path/to/Dockerfile in [OPTIONS].
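For example (a sketch with placeholder paths), the context and the Dockerfile can live in completely different directories:
docker build -f /home/me/dockerfiles/Dockerfile.api -t myimage:latest /home/me/project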
For Dockerfiles
Some information on the purpose of a Dockerfile:
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
More information on the arguments and how to use them is in the Docker documentation:
https://docs.docker.com/engine/reference/builder/
For Docker build
Description
Build an image from a Dockerfile
Usage
docker build [OPTIONS] PATH | URL | -
Extended description
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.
More information in the Docker documentation:
https://docs.docker.com/engine/reference/commandline/build/
If docker build now succeeds locally but Jenkins still reports the same error as before,
you need to check the file path /var/lib/snapd/void/Dockerfile on the Jenkins server running the job. The Jenkins build error also reports the workspace location /var/lib/jenkins/workspace/docExp; check the symlinks and permissions there so that you do not receive any errors.
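A quick way to check this on the Jenkins server (both paths are taken from the error messages above):
ls -l /var/lib/snapd/void/Dockerfile
ls -ld /var/lib/jenkins/workspace/docExp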

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy over some file from the host filesystem and not from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ path?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
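A minimal sketch of such a preparation script, reusing the file names from this question (the image tag is a placeholder):
#!/bin/sh
set -e
cp ~/.super/secrets.yaml ./secrets.yaml   # bring the external file into the build context
docker build -t myimage .                 # COPY secrets.yaml ... now works inside the Dockerfile
rm ./secrets.yaml                         # remove the temporary copy again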
No, it is not possible to go up out of the build context. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Well, here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you can see, the dot references the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only those files in your Dockerfile. Of course, you might think you are being clever and reference / (the root path) so you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see is that the docker client appears to freeze. Or does it really? It's not really freezing; it's sending the entire / directory to the Docker daemon, which can take ages, or (more likely) you may run out of memory.
So now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file in that root directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that this will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need at runtime, it is better to pass it in via an environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
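One such alternative, sketched on the assumption that BuildKit is available (Docker 18.09 or newer); the secret id app_secret and the cat command are only stand-ins for whatever build step actually needs the file:
# syntax=docker/dockerfile:1
FROM gradle:latest
# the secret is mounted only while this RUN executes and is never written into an image layer
RUN --mount=type=secret,id=app_secret cat /run/secrets/app_secret > /dev/null
Built with:
DOCKER_BUILDKIT=1 docker build --secret id=app_secret,src=secrets.yaml .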

Docker build with custom path and custom filename not working

I am using Docker version 19.03.2 in Windows 10.
I have a directory C:\docker_test with files Dockerfile and Dockerfile_test. When I execute
PS C:\> docker build 'C:\docker_test'
in PowerShell, the image is built from Dockerfile, just as expected. However, when I want to build an image from Dockerfile_test like this
PS C:\> docker build -f Dockerfile_test 'C:\docker_test'
I get
unable to prepare context: unable to evaluate symlinks in Dockerfile path:
CreateFile C:\Dockerfile_test: The system cannot find the file specified.
I don't understand why Docker is looking for C:\Dockerfile_test although I specified a build path.
You should give the full path to your Dockerfile (inside the context), like so:
PS C:\> docker build -f 'C:\docker_test\Dockerfile_test' 'C:\docker_test'
There is already an answer to the question, but this is to add a bit of detail.
From the docs
The docker build command builds an image from a Dockerfile and a
context. The build’s context is the set of files at a specified
location PATH or URL
With C:\docker_test you specified the context.
From the same docs
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context. You use the -f flag with docker build to point to a Dockerfile anywhere in your file system.
Therefore, if you specify the -f flag, Docker will search for the given file. In your case you passed a file name without a path, so Docker searched in the current directory (C:\).
To make it work, use the command as suggested by Arik.
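Equivalently, assuming you run the command from inside the context directory, a relative -f path resolves fine as well:
PS C:\docker_test> docker build -f Dockerfile_test .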

Why use -f in the docker build command

I'm following this K8s tutorial, and in the middle of the page there is the following instruction:
12. Now let’s build an image, giving it a special name that points to our local cluster registry.
$ docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan
I don't understand why you need to point to the Dockerfile using -f applications/hello-kenzan/Dockerfile.
In the man of docker build:
-f, --file=PATH/Dockerfile
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is Dockerfile.
So -f points to the Dockerfile, but we already gave the path of the Dockerfile at the end of the build command - docker build ...applications/hello-kenzan - so why do we need to write it twice? Am I missing something?
The reason is that the author probably had multiple files called Dockerfile, and using -f tells Docker NOT to use the Dockerfile in the current directory (when there is one) but to use the Dockerfile in applications/hello-kenzan instead.
While in THIS PARTICULAR example it was unnecessary, I appreciate that the tutorial creator shows you the option to use PATH and point -f at a specific place, so you (the person who wants to learn) can see that it is possible to use Dockerfiles that are not in PATH (i.e. when you have multiple Dockerfiles you want to create your builds with, or when the file is not named Dockerfile but e.g. myapp-dockerfile).
You are right. In this case you don't have to use the -f option. The official docs say:
-f: Name of the Dockerfile (default is ‘PATH/Dockerfile’)
As the given PATH is applications/hello-kenzan, the Dockerfile there will be found implicitly.
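In other words, for this tutorial step the following two invocations produce the same image:
docker build -t 127.0.0.1:30400/hello-kenzan:latest applications/hello-kenzan
docker build -t 127.0.0.1:30400/hello-kenzan:latest -f applications/hello-kenzan/Dockerfile applications/hello-kenzan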

Use console output in Dockerfile

I want to use some console output as the name of my base docker image.
Specifically, I have a chain of dependent Docker build files, so I am trying to automate this process. For instance, the Dockerfile of one image derived1 depends on the base image base_image_name in the following scenario:
base_image_name/
    Dockerfile
    derived1/
        Dockerfile
    derived2/
        Dockerfile
When the base image builds, it grabs its name from its current folder by using ${PWD##*/}. In this case, the base image's folder is called base_image_name, and so the base image is called company:base_image_name.
Then when the derived images build, they should just be able to figure out the base image's name by moving up a directory and looking at that directory's name. So for instance, when the company:derived1 image builds, it should look up one directory, see that it is called base_image_name, and from that infer that it should use the base image company:base_image_name.
I would like to have this structure several layers deep, so I want to automate it. To do that, I have tried several permutations of the syntax
FROM company:$(cd $PWD/../; echo ${PWD##*/})
but I can't seem to get it right. To understand what the command $(cd $PWD/../; echo ${PWD##*/}) is doing, just type it into your terminal:
echo $(cd $PWD/../; echo ${PWD##*/})
simply returns the name of the directory one level up. However, when I try to use this in a Dockerfile, I get the error
Error response from daemon: Dockerfile parse error line 1: FROM requires either one or three arguments
Could somebody please provide me with the correct syntax?
EDIT:
I also tried building the derived images with a build-arg, but that doesn't seem to work either:
build.sh:
BASE=$(cd $PWD/../../; echo ${PWD##*/})
echo "BASE="$BASE
docker build --build-arg BASE=${BASE} -t company:"${PWD##*/}" .
where the Dockerfile looks like
FROM company:$BASE
Specifically, this yields the build error:
BASE=base_image_name
Sending build context to Docker daemon 5.12kB
Step 1/3 : FROM company:$BASE
invalid reference format
So it seems that Docker is not interpreting that build arg correctly.
Dockerfiles don't support shell syntax in general, except for some very limited environment variable expansion.
They do support ARGs that can be passed in from the command line, and an ARG can be used to define the image in FROM. So you could start your Dockerfile with
ARG tag
FROM company:${tag:-latest}
and then build the image with
docker build --build-arg tag=$(cd $PWD/../; echo ${PWD##*/}) .
(which is involved enough that you might want to write it into a shell script).
At a very low level, it's also worth remembering that docker build works by making a tar file of the current directory, sending it across an HTTP connection to the Docker daemon, and running the build there. Once that process has happened, any notion of the host directory name is lost. In other words, even if the syntax worked, docker build also doesn't have the capability to know which host directory the Dockerfile is in.
Aha. Found it.
As Jonathon points out, it seems as though you can't easily pull stuff in from your environment into the build system. It seems that you must use Docker build-args.
The solution was to evaluate the variable in the terminal and pass that as a build-arg:
build.sh:
BASE=$(cd $PWD/../; echo ${PWD##*/})
echo "BASE="$BASE
docker build --build-arg BASE=${BASE} -t company:"${PWD##*/}" .
Then inside the Dockerfile of the derived image:
ARG BASE
FROM company:$BASE
You're trying to use bash command substitution in something that isn't consumed by bash.
The Dockerfile reference (https://docs.docker.com/engine/reference/builder/) indicates that FROM only supports substitution of variables declared with ARG before it.
You'll instead need to compute the value outside of the Dockerfile and pass it to docker build as a build argument for use in FROM.
