I'm having some trouble building a Docker image because of the way the code is structured. The code is written in C#, and the solution contains a lot of projects that "support" the application I want to build.
My problem is that if I put the Dockerfile in the root I can build it without any problem, and that works, but I don't think it's optimal, because we have some other Dockerfiles we also need to build, and if I put them all into the root folder I think it will end up messy.
So if I put the Dockerfile into the folder with the application, how do I navigate to the root folder to grab the folders I need?
I tried with "../", but as far as I can tell it didn't work. Is there any way to do this, or what is best practice in this scenario?
TL;DR
Run the build from the root directory:
docker build . -f ./path/to/dockerfile
The long answer:
In a Dockerfile you can't really go "up".
Why?
When the Docker daemon builds your image, it uses two parameters:
your Dockerfile
the context
The context is what you refer to as . in the Dockerfile (for example in COPY . /app).
Both of them affect the final image: the Dockerfile determines what is going to happen, and the context tells Docker which files it should perform those operations on.
This is how the docs put it:
A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
So, usually the context is the directory where the Dockerfile is placed. My suggestion is to leave it where it belongs. Name your Dockerfiles after their role (Dockerfile.dev, Dockerfile.prod, etc.); it's OK to have a few of them in the same directory.
The context can still be changed:
After all, you are the one who specifies the context, since the docker build command accepts both the context and the Dockerfile path. When I run:
docker build .
I am actually giving it my current directory as the context (I've omitted the Dockerfile path, so it defaults to PATH/Dockerfile).
So if you have a Dockerfile in dockerfiles/Dockerfile.dev, you should place yourself in the directory you want as the context and run:
docker build . -f dockerfiles/Dockerfile.dev
The same applies to the build section in docker-compose (you specify a context and a Dockerfile path there).
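For example, a minimal compose sketch (service and path names are illustrative):

services:
  app:
    build:
      context: .                               # the project root is the context
      dockerfile: dockerfiles/Dockerfile.dev   # path relative to that context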
Hope that made sense.
You can use the RUN command and chain whatever you want after &&:
RUN cd ../ && <your command>
Note that the directory change only applies within that single RUN instruction, and only inside the image filesystem, not the build context.
I have a very strange issue with copying the contents of subdirectories to a Docker container.
This is the directory structure:
Note: there are two Dockerfiles; I use the one on the upper level for test purposes. Ignore the one in the WebApp folder.
I want to copy the directories Bilder and JSON to the container, including all their contents, but it doesn't work: the folders end up empty in the container. Copying Testdir, however, does work.
This is part of my Dockerfile:
FROM python:3.7-buster
# -- Init --
RUN mkdir -p /app/src
WORKDIR /app/src
# works
ADD WebApp/Testdir ./Testdir
# doesn't work
ADD WebApp/Bilder ./Bilder
# to check the contents
CMD ["sleep", "50"]
I build the image as part of a docker-compose.yml file with
docker-compose build test
Does anyone have a clue what's happening here? I've been searching for a solution for quite some time...
In case anyone is interested in why this was a problem: it actually had nothing to do with Docker. I was working on a cluster that was not synchronizing my local files to the server correctly, so I solved the issue by checking each time whether the files had actually been copied from my local machine to the cluster. If someone has a similar issue, it's worth checking whether file accessibility is the problem.
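One quick way to rule Docker out (a hypothetical debugging line, using the same Dockerfile) is to list the copied directories during the build, which makes their contents visible in the build output and fails if a directory is missing:

RUN ls -laR ./Testdir ./Bilder ./JSON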
I am setting up a Docker image; in the Dockerfile I have an ADD command whose source is a variable.
The Dockerfile takes a build argument, and I want to use that arg as the source of the ADD command.
But the ADD command is not expanding the variable and I get an error.
Please share any workaround that comes to mind.
FROM ubuntu
ARG source_dir
RUN echo ${source_dir}
ADD ${source_dir} ./ContainerDir
Build command:
docker build . -t image --build-arg source_dir=/home/john/Desktop/data
Error:
Step 3/3 : ADD ${source_dir} ./ContainerDir
ADD failed: stat /var/lib/docker/tmp/docker-builder311119108/home/john/Desktop/data: no such file or directory
However, the directory (/home/john/Desktop/data) exists.
From the error message, the variable did expand, and Docker is complaining that the path isn't in your build context:
stat /var/lib/docker/tmp/docker-builder311119108/a/b/c: no such file or directory
In your example, the build context is . (the current directory), so you need a/b/c inside the current directory for this not to error. That path also must not be excluded by a ./.dockerignore file, if you have one.
From your second edit:
docker build . -t image --build-arg source_dir=/home/john/Desktop/data
It looks like you are trying to include a directory in your build from outside of the build context. That is explicitly not allowed in Docker builds. All files needed by ADD and COPY commands must be included in your context, and the entire content of the context is sent to the build server in the first step, so you want to keep it small (rather than sending your entire home directory). The source is always relative to this context, so /home/... is looked up as ./home/... since your context is . in the build command.
The fix is to move the data directory so that it is a subdirectory of ., where you are building your Docker images. You can also switch to COPY, since you don't need any of ADD's extra functionality.
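A minimal sketch of that fix, assuming the data directory has been moved inside the build context (paths are illustrative):

FROM ubuntu
ARG source_dir=data
# the source is resolved relative to the build context
COPY ${source_dir} ./ContainerDir

built with:

docker build . -t image --build-arg source_dir=data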
Disclaimer: there are two pieces of oversimplification here:
The COPY command can include files from different contexts using the --from option to COPY.
The entire context is sent before the build starts with the classic build command. The newer BuildKit implementation is much more selective about how much and what parts of the context to send.
I have a big tar/executable (over 30 GB) that I COPY/ADD, but it is only used for the installation. Once the application is installed I don't need it anymore.
What can I do? At the moment:
Every time I run a build, it takes minutes to prepare the build context.
I'd like to share this image. If I create a tar with docker save, is only the final version included in it, or every layer?
I found some solutions that say I can use RUN wget ... && rm tar, but I don't want to set up a web server just for that.
Why isn't it possible to mount a volume during the build process?! It would be very useful.
Use Docker's multi-stage builds. This mechanism allows you to drop intermediate artifacts and therefore achieve a lightweight image.
Example:
FROM alpine:latest as build
# copy large file
# build
FROM alpine:latest as output
# copy necessary files built in the previous stage
COPY --from=build app /app
Anything built in the build stage will not be included in the final image unless you explicitly COPY it.
Docs: https://docs.docker.com/develop/develop-images/multistage-build/
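A slightly fuller sketch for the installer scenario above (the image names, archive name, paths and install command are illustrative assumptions):

FROM ubuntu:22.04 as build
# the huge archive only ever exists in this throwaway stage
COPY big-installer.tar.gz /tmp/
RUN tar -xzf /tmp/big-installer.tar.gz -C /tmp \
 && /tmp/installer/install.sh --prefix=/opt/myapp \
 && rm -rf /tmp/*

FROM ubuntu:22.04 as output
# only the installed application is carried over to the final image
COPY --from=build /opt/myapp /opt/myapp
ENTRYPOINT ["/opt/myapp/bin/myapp"]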
This is solvable using two different contexts.
Please follow the steps below.
The objective is to create:
a Docker image that will hold your large build file.
a Docker image that will hold your real codebase/executables.
For this you have to create two folders (BUILD & CodeBase) as follows:
Application
|---> BUILD
|        |---> Large-File
|        |---> Dockerfile
|---> CodeBase
|        |---> SRC + other stuff
|        |---> Dockerfile
Both the BUILD and CodeBase folders have their own Dockerfile; arrange your files accordingly.
Dockerfile(Build)
FROM **Base-Image**
COPY Large-File /tmp/Large-File
Build this and tag it with a name like base-build-app-image:
#>cd Application <==Application root folder as mentioned above==>
#>docker build -t base-build-app-image BUILD <==path of your build-folder==>
Dockerfile(Codebase)
FROM base-build-app-image
RUN *****
CMD *****
RUN rm -f **/tmp/Large-File**
RUN rm -f **installation files that are no longer required**
ENTRYPOINT *****
Build this CodeBase image; base-build-app-image is already in your local Docker repository, and your large file is not in the current build context:
#>cd Application <==Application root folder as mentioned above==>
#>docker build CodeBase <==path of your code-base==>
This time, since the context is only your codebase and does not include the large file, it will definitely reduce your build time.
You can also take advantage of docker-compose to do both operations together, so you will not have to execute two separate commands.
If you need help preparing this docker-compose file, let me know in the comments.
If anything is not clear, leave a comment or come over to chat to sort it out.
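For reference, a rough sketch of such a docker-compose file, assuming the folder layout above (service and image names are illustrative):

services:
  build-base:
    image: base-build-app-image
    build:
      context: ./BUILD
  app:
    image: app-image
    build:
      context: ./CodeBase

Build the base image first, then the application image, e.g. docker-compose build build-base && docker-compose build app.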
Update:
I just realized that the ADD/COPY commands don't permit access to files or directories outside of the directory the image is built from (the build context) on the host.
One more thing: if you specify an absolute path to a file/directory as the source of ADD/COPY, it will also not be permitted.
Please refer to this and happy hacking! :)
=======================================================================
I would like to copy/add files under a user's home directory on the host into the container's home directory for the same user.
First of all, the user building the Docker image with this Dockerfile can be different on each host. For instance, on my host there is a user "test"; on another person's host there will be a user "newbie". The Dockerfile will be built/used on each of these hosts.
The following is my test syntax for copying/adding files.
...
RUN mkdir -p /home/${USER}/.ssh
ADD /home/${USER}/.ssh/id_rsa* /home/${USER}/.ssh/
or COPY /home/${USER}/.ssh/id_rsa* /home/${USER}/.ssh/
...
When I try to build this Dockerfile, the following error is displayed:
Step 43/44 : ADD /home/user/.ssh/id_rsa* /home/${USER}/.ssh/
No source files were specified
Please kindly guide me to do what I want to do. :)
Thanks.
It has now been two years since the question was asked, but I want to cite the official documentation here, which states the same thing that @Sung-Jin Park already found out.
ADD obeys the following rules:
The path must be inside the context of the build; you cannot ADD ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
Dockerfile reference ADD
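Tying this back to the earlier answers about choosing the context, an illustration of the rule (paths are hypothetical): a source outside the directory you pass as the build context cannot be reached, and the usual fix is to run the build from a directory that does contain the files.

# fails: the source is outside the build context
#   COPY ../.ssh/id_rsa.pub /home/test/.ssh/
# works: make the home directory the build context instead
cd /home/test
docker build -f /path/to/project/Dockerfile .
# now, inside the Dockerfile, sources are relative to /home/test:
#   COPY .ssh/id_rsa.pub /home/test/.ssh/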
You can use the following:
WORKDIR /home
COPY ${pwd}/my-file.txt .
I used to list the tests directory in .dockerignore so that it wouldn't get included in the image, which I used to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no related option.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. This flag can be used with docker compose since 1.25.0-rc3 by also specifying COMPOSE_DOCKER_CLI_BUILD=1.
See also comment0, comment1, comment2
From Mugen's comment, please note that the custom dockerignore should be in the same directory as the Dockerfile and not in the root context directory like the original .dockerignore.
I.e., when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at /path/to/custom.Dockerfile.dockerignore
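Applied to the tests use case from the original question, a hypothetical setup could look like this (file and image names are illustrative):

# .dockerignore                    excludes tests/ for the normal web-service image
# Dockerfile.tests.dockerignore    a variant that does NOT exclude tests/
DOCKER_BUILDKIT=1 docker build -f Dockerfile.tests -t myapp-tests .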
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker to provide the ignore file to use - please see here.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and run the Docker build there.
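A rough sketch of the second option (paths and names are illustrative):

#!/bin/sh
# stage only the files the test image needs into a temporary build context
tmp=$(mktemp -d)
cp -r src tests Dockerfile "$tmp"
docker build -t myapp-tests "$tmp"
rm -rf "$tmp"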
Adding the tests back in as a volume mount could be an option here. After you build the image, if you are running it for testing, mount the source code containing the tests on top of the cleaned-up code:
services:
tests:
image: my-clean-image
volumes:
- '../app:/opt/app' # Add removed tests
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I create an intermediate tar archive using tar's -T flag, which takes a text file listing the files to include, so it's not that different from a whitelist-style .dockerignore.
I pipe this tar into the docker build command and specify my Dockerfile, which can live anywhere in my file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt | docker build -f path/to/Dockerfile -
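where files-to-include.txt could look something like this (hypothetical contents, one path per line):

src/
tests/
package.json
path/to/Dockerfile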
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top, and then ADD the tests, plus any required tools (in my case, mocha, chai and so on). This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
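A minimal sketch of such a derived testing image, assuming a Node project whose main image is tagged myapp:latest and whose tests live in ./test (names are illustrative):

FROM myapp:latest
# layer the test suite and test-only tooling on top of the application image
COPY test ./test
RUN npm install --no-save mocha chai
CMD ["npx", "mocha", "test"]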
If the tests are integration tests--for example the primary image might be a GraphQL server--then the image I create is self-contained, i.e., is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up both a container using the primary image, and another container using the integration testing image, and set the environment variables so that the test suite knows what to test.
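A rough compose sketch for the integration case (service names, image tags and the environment variable are illustrative):

services:
  api:
    image: myapp:latest                     # the primary image, e.g. the GraphQL server
  integration-tests:
    image: myapp-integration-tests          # self-contained test image
    environment:
      - API_URL=http://api:4000/graphql     # tells the suite which endpoint to test
    depends_on:
      - api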
Sadly it isn't currently possible to point to a specific file to use as .dockerignore, so we generate it in our build script based on the target/platform/image. As Docker enthusiasts, we find this a sad and embarrassing workaround.