How does the Dockerfile ADD command work?

Trying to figure out how to use this command is frustrating. Apparently the files have to be inside the build context but now I've moved the file into the build context (or at least I think I have) and I'm still getting the same error:
INFO[0000] No source files were specified
Here's the directory structure on the host:
/srv/uwsgi/
- Dockerfile
- uwsgi.ini
Here are the pertinent commands from my Dockerfile:
FROM ubuntu:trusty
RUN sudo mkdir -p /srv/www/cc/
ADD ["./uwsgi.ini" "/srv/www/uwsgi.ini"]
Tried several variations on the ADD, with ./ and without, having the file outside of the context, and with the full path... What am I missing?

To ADD files in a Dockerfile:
FROM ubuntu:trusty
RUN sudo mkdir -p /srv/www/cc/
ADD ./uwsgi.ini /srv/www/uwsgi.ini
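If you prefer the exec (JSON array) form, note that the strings must be comma-separated; the missing comma is likely what broke the original attempt:
ADD ["./uwsgi.ini", "/srv/www/uwsgi.ini"]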

Related

Docker: COPY failed: file not found in build context (Dockerfile)

I'd like to instruct Docker to COPY my certificates from the local /etc/ folder on my Ubuntu machine.
I get the error:
COPY failed: file not found in build context or excluded by
.dockerignore: stat etc/.auth_keys/fullchain.pem: file does not exist
I have not excluded anything in .dockerignore.
How can I do it?
Dockerfile:
FROM nginx:1.21.3-alpine
RUN rm /etc/nginx/conf.d/default.conf
RUN mkdir /etc/nginx/ssl
COPY nginx.conf /etc/nginx/conf.d
COPY ./etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY ./etc/.auth_keys/privkey.pem /etc/nginx/ssl/
WORKDIR /usr/src/app
I have also tried without the dot --> same error
COPY /etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /etc/.auth_keys/privkey.pem /etc/nginx/ssl/
By placing the folder .auth_keys next to the Dockerfile --> works, but not desirable
COPY /.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /.auth_keys/privkey.pem /etc/nginx/ssl/
The Docker build context is the directory you pass to docker build, which is usually the directory the Dockerfile is located in. Files outside the context cannot be copied into the image; if you want to build an image, that is one of the restrictions you have to face.
In this documentation you can see how contexts can be switched, but to keep it simple, just consider the directory containing the Dockerfile to be the context. Note: this also doesn't work with symbolic links pointing outside the context.
So your observation was correct: you need to place the files you want to copy inside that same directory.
Alternatively, if you don't need to copy them but still want them available at runtime, you could opt for a mount. I can imagine this not working in your case, because you likely need the files at startup of the container.
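For reference, a minimal sketch of such a runtime mount, reusing the paths from the question (the image name is hypothetical):
# my-nginx-image is a placeholder; paths mirror the question
docker run -d \
    -v /etc/.auth_keys:/etc/nginx/ssl:ro \
    -p 443:443 \
    my-nginx-image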
#JustLudo's answer is correct in this case. However, for those who have the correct files in the build directory and are still seeing this issue: remove any trailing comments.
Coming from a C or JavaScript background, one may be forgiven for assuming that trailing comments are ignored (e.g. COPY my_file /etc/important/ # very important!), but they are not! The error message won't point this out, as of my version of Docker (20.10.11).
For example, the above erroneous line will give an error:
COPY failed: file not found in build context or excluded by .dockerignore: stat etc/important/: file does not exist
... i.e. no mention that it is the trailing # important! that is tripping things up.
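For instance, moving the comment onto its own line makes the build work (file and directory names are just the ones from the example above):
# very important!
COPY my_file /etc/important/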
It's also important to note that, as mentioned in the docs:
If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.
That is, if you're running build like this:
docker build -t dh/myimage - < Dockerfile_test
Any COPY or ADD, having no context, will throw the error mentioned or another similar:
failed to compute cache key: "xyz" not found: not found
If you face this error and you're piping your Dockerfile, then I advise using -f to point to a custom Dockerfile.
docker build -t dh/myimage -f Dockerfile_test .
(the . sets the context to the current directory)
Here is a test you can do yourself:
In an empty directory, create a Dockerfile_test file with this content:
FROM nginx:1.21.3-alpine
COPY test_file /my_test_file
Then create a dummy file:
touch test_file
Run build piping the test Dockerfile, see how it fails because it has no context:
docker build -t dh/myimage - < Dockerfile_test
[..]
failed to compute cache key: "/test_file" not found: not found
[..]
Now run build with -f, see how the same Dockerfile works because it has context:
docker build -t dh/myimage -f Dockerfile_test .
[..]
=> [2/2] COPY test_file /my_test_file
=> exporting to image
[..]
Check your docker-compose.yml; it might be changing the context directory.
I had a similar problem, with one clarification: I was running my Dockerfile through docker-compose.yml.
This is what my Dockerfile looked like when I got the error:
FROM alpine:3.17.0
ARG DB_NAME \
    DB_USER \
    DB_PASS
RUN apk update && apk upgrade && apk add --no-cache \
php \
...
EXPOSE 9000
COPY ./conf/www.conf /etc/php/7.3/fpm/pool.d #<--- an error was here
COPY ./tools /var/www/ #<--- and here
ENTRYPOINT ["sh", "/var/www/start.sh"]
This is part of my docker-compose.yml where I described my service.
wordpress:
  container_name: wordpress
  build:
    context: . #<--- the problem was here
    dockerfile: requirements/wordpress/Dockerfile
    args:
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASS: ${DB_PASS}
  ports:
    - "9000:9000"
  depends_on:
    - mariadb
  restart: unless-stopped
  networks:
    - inception
  volumes:
    - wp:/var/www/
My docker-compose.yml was changing the context directory. So I wrote new paths in the Dockerfile, and then everything worked.
COPY ./requirements/wordpress/conf/www.conf /etc/php/7.3/fpm/pool.d
COPY ./requirements/wordpress/tools /var/www/
FWIW, this same error shows up when running gcloud builds submit if the files are matched by .gitignore (by default, gcloud generates a .gcloudignore from your .gitignore, so those files are excluded from the upload).
Have you tried creating a symlink with ln -s to the /etc/certs/ folder in the Docker build directory?
Alternatively, you could have one image that contains the certificates, and in your image you just COPY --from= the image that has the certs.
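A minimal sketch of that approach (the certs image name is hypothetical):
FROM nginx:1.21.3-alpine
# copy certs out of a pre-built image that contains them;
# mycompany/certs-image is a placeholder
COPY --from=mycompany/certs-image:latest /etc/nginx/ssl/ /etc/nginx/ssl/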
I had the same error. I resolved it by adding this to my Docker build command:
docker build --no-cache -f ./example-folder/example-folder/Dockerfile .
This points Docker at the project's root directory as the build context. Even if your Dockerfile seems to run (i.e. the system seems to locate it and starts running it), I found I needed to have the build context defined as above for any copying to happen.
Inside my Dockerfile, I had the file copying like this:
COPY ./example-folder/example-folder /home/example-folder/example-folder
In my case, I had merely quoted the source file while building a Windows container, e.g.,
COPY "file with space.txt" c:/some_dir/new_name.txt
Docker doesn't like the quotes.
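If the filename genuinely contains spaces, the JSON (exec) form is the documented way to handle it:
COPY ["file with space.txt", "c:/some_dir/new_name.txt"]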

Cannot find files generated in Docker on the localhost

Let's consider the following directory structure. (Note: a directory name ends with \)
root\
|
-- some stuff
|
-- application\
|    |
|    -- app_stuff
|    |
|    -- out\
|    |
|    -- main.cpp
|
-- some stuff
I'm trying to build this app via Docker.
The Dockerfile looks like:
FROM emscripten/emsdk:latest
RUN apt-get -q update
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN em++ application/main.cpp -o application/out/app.html
RUN pip3 install aiohttp
RUN pip3 install aiohttp_jinja2
RUN pip3 install jinja2
RUN ls application/out
The docker-compose.yml looks like:
version: '3.8'
services:
  application:
    build: .
    volumes:
      - ./application/out:/app/application/out
    command: python3 application/entry.py
    ports:
      - "8080:8080"
As you may notice in the Dockerfile (RUN em++ application/main.cpp -o application/out/app.html), Docker generates new files into the out directory while processing the build. However, once it's done I can't find those files on the host.
Note: these files do appear in application\out inside the container.
...
Step 10/10 : RUN ls application/out
---> Running in 603f6b99f4b0
app.html
app.js
app.wasm
...
Where have I made a mistake?
The Dockerfile gives instructions on how to build a docker image, and not on what happens in the live container.
If you mount a volume, either via docker-compose or via a docker run command, either way, the volume will only be mounted once the container is created.
So what happens is:
first, Docker creates the image by executing the commands in the Dockerfile, and stores the result as an image
then Docker creates a container using the stored image
then Docker mounts the volumes you defined in the docker-compose.yml file (at this point, if anything is already present in the target directory, either the mount will fail or the original content of the target directory will be moved to a 'lost-and-found' directory)
then the ENTRYPOINT or CMD command is run (so here that would be python3 application/entry.py)
So if you need to get the output files into your host directory, you need to either create those files in the entrypoint script or copy them there in the entrypoint script.
So you can create a file called myscript.sh with the following content:
#!/bin/bash
em++ /app/application/main.cpp -o /app/application/out/app.html
python3 /app/application/entry.py
In your Dockerfile, you remove the line RUN em++ application/main.cpp -o application/out/app.html and replace it with:
COPY ./myscript.sh /
ENTRYPOINT /myscript.sh
and you remove the line command: python3 application/entry.py from your docker-compose.yml file.
You can use the CMD command rather than ENTRYPOINT if you prefer; that's just a matter of personal preference.
A Docker-compose volume can link a directory on the host to a directory inside of a container. You are overwriting the /app/application/out directory inside of the container with a volume to the host's ./application/out, effectively erasing any contents of /app/application/out originating from your built image.
Given the context, I presume your host's ./application/out directory is empty and you are overwriting the container's /app/application/out directory with nothing. You can test this by removing the volumes tag and seeing if the application is able to find files under /app/application/out afterwards.
Unrelated to your issue: take into consideration that your apt-get update command will cache Debian remote repository lists in your built image, which adds wasted space to the final image. See this post about deleting the cached lists.
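The usual idiom for avoiding that wasted space is to clean the lists in the same RUN layer (a generic sketch, not specific to this Dockerfile):
# 'some-package' is a placeholder for whatever you actually install
RUN apt-get -q update \
    && apt-get install -y --no-install-recommends some-package \
    && rm -rf /var/lib/apt/lists/*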

Docker RUN Not Finding an Executable File

I am having issues setting up a Dockerfile in Ubuntu. I tried the following command:
sudo docker build -t chaste .
But when it reaches to the following command:
RUN chmod +x chaste.sh && ./chaste.sh -q && rm -f chaste.sh
I get the following error:
chmod: cannot access 'chaste.sh': No such file or directory
However, chaste.sh is in the current directory. I am not sure why it complains about not being able to find it.
I would appreciate it if someone could help me out.
To use a file from the current directory, you first have to add it from the build context into the image, by placing the following command above the RUN command in your Dockerfile:
ADD ./chaste.sh ./chaste.sh
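As an aside, COPY is generally preferred over ADD for plain local files (ADD additionally handles remote URLs and tar auto-extraction), so this would work just as well:
COPY ./chaste.sh ./chaste.sh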

Why is git clone failing when I build an image from a Dockerfile?

FROM ansible/ansible:ubuntu1604
MAINTAINER myname
RUN git clone http://github.com/ansible/ansible.git /tmp/ansible
RUN git clone http://github.com/othertest.git /tmp/othertest
WORKDIR /tmp/ansible
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin:bin
ENV PYTHONPATH /tmp/ansible/lib:$PYTHON_PATH
ADD inventory /etc/ansible/hosts
WORKDIR /tmp/
EXPOSE 8888
When I build from this Dockerfile, I get Cloning into /tmp/ansible and /tmp/othertest in red text (which I assume is an error). When I then run the container and look around, I see that all the steps from the Dockerfile built correctly, except that the git repositories are missing.
I can't figure out what I'm doing wrong, I'm assuming its a simple mistake.
Building the Dockerfile:
sudo docker build --no-cache -f Dockerfile .
Running the container:
sudo docker run -i -t de32490234 /bin/bash
The short answer:
Put your files anywhere other than in /tmp and things should work fine.
The longer answer:
You're basing your image on the ansible/ansible:ubuntu1604 image. If you inspect this image via docker inspect ansible/ansible:ubuntu1604 or look at the Dockerfile from which it was built, you will find that it contains a number of volume mounts. The relevant line from the Dockerfile is:
VOLUME /sys/fs/cgroup /run/lock /run /tmp
That means that all of those directories are volume mount points, which means any data placed into them will not be committed as part of the image build process.
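A minimal sketch of the fix; the choice of /opt here is an assumption, and any directory that isn't a declared volume will do:
FROM ansible/ansible:ubuntu1604
# /opt is not declared as a VOLUME in the base image,
# so files cloned here survive the build
RUN git clone http://github.com/ansible/ansible.git /opt/ansible
WORKDIR /opt/ansible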
Looking at your Dockerfile, I have two comments unrelated to the above:
You're explicitly setting the PATH environment variable, but you're neglecting to include /bin, which will cause all kinds of problems, such as:
$ docker run -it --rm <your-image> bash
docker: Error response from daemon: oci runtime error: exec: "bash": executable file not found in $PATH.
You're using WORKDIR twice, but the first time (WORKDIR /tmp/ansible) you're not actually doing anything that cares what directory you're in (you're just setting some environment variables and copying a file into /etc/ansible).

Running composer install in docker container

I have a docker-compose.yml file which looks like this:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
The Dockerfile, located in ./docker/php, looks like this:
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN composer update -d /var/www/website
Even so, this always fails with the error:
[RuntimeException]
Invalid working directory specified, /var/www/website does not exist.
When I remove the RUN composer update line and enter the container, the directory does exist and contains my project code.
Please tell me if I am doing anything wrong, or if I'm running the composer update in the wrong place.
RUN ... lines are run when the image is being built.
Volumes are attached to the container. You have at least two options here:
use the COPY command to, well, copy your app code into the image so that all commands after it have access to the code (do not push the image to any public Docker repo, as it will contain your source, which you probably don't want to leak); see the sketch after this list
install composer dependencies with a command run in your container (CMD or ENTRYPOINT in the Dockerfile, or the command option in docker-compose)
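A minimal sketch of the first option (paths mirror the question; whether you want composer install or composer update is your call):
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php composer-setup.php \
    && php -r "unlink('composer-setup.php');" \
    && mv composer.phar /usr/local/bin/composer
WORKDIR /var/www/website
# bake the source into the image so composer can see it at build time
COPY . /var/www/website
RUN composer install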
You are mounting your local volume over your build directory, so anything you built into /var/www/website will be covered by your local volume when the container runs.
