I'd like to instruct Docker to COPY my certificates from the local /etc/ folder on my Ubuntu machine.
I get the error:
COPY failed: file not found in build context or excluded by
.dockerignore: stat etc/.auth_keys/fullchain.pem: file does not exist
I have not excluded anything in .dockerignore.
How can I do it?
Dockerfile:
FROM nginx:1.21.3-alpine
RUN rm /etc/nginx/conf.d/default.conf
RUN mkdir /etc/nginx/ssl
COPY nginx.conf /etc/nginx/conf.d
COPY ./etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY ./etc/.auth_keys/privkey.pem /etc/nginx/ssl/
WORKDIR /usr/src/app
I have also tried without the dot --> same error
COPY /etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /etc/.auth_keys/privkey.pem /etc/nginx/ssl/
By placing the folder .auth_keys next to the Dockerfile --> works, but not desirable
COPY /.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /.auth_keys/privkey.pem /etc/nginx/ssl/
The Docker build context is the directory the Dockerfile is located in. If you want to build an image, that is one of the restrictions you have to face.
In this documentation you can see how contexts can be switched, but to keep it simple, just consider that same directory to be the context. Note: this also doesn't work with symbolic links.
So your observation was correct and you need to place the files you need to copy in the same directory.
Alternatively, if you don't need to copy them but still want them available at runtime, you could opt for a mount. I can imagine this not working in your case, because you likely need the files at startup of the container.
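As a rough sketch of that run-time alternative (the container and image names here are hypothetical), the certificates could be bind-mounted instead of copied:
docker run -d --name my-nginx \
  -v /etc/.auth_keys/fullchain.pem:/etc/nginx/ssl/fullchain.pem:ro \
  -v /etc/.auth_keys/privkey.pem:/etc/nginx/ssl/privkey.pem:ro \
  my-nginx-image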
@JustLudo's answer is correct in this case. However, for those who have the correct files in the build directory and are still seeing this issue: remove any trailing comments.
Coming from a C and JavaScript background, one may be forgiven for assuming that trailing comments are ignored (e.g. COPY my_file /etc/important/ # very important!), but they are not! The error message won't point this out, as of my version of Docker (20.10.11).
For example, the above erroneous line will give an error:
COPY failed: file not found in build context or excluded by .dockerignore: stat etc/important/: file does not exist
... i.e. no mention that it is the trailing # very important! that is tripping things up.
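A minimal sketch of the fix is simply to move the comment onto its own line:
# very important!
COPY my_file /etc/important/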
It's also important to note that, as mentioned in the docs:
If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.
That is, if you're running build like this:
docker build -t dh/myimage - < Dockerfile_test
Any COPY or ADD, having no context, will throw the error mentioned above or another similar one:
failed to compute cache key: "xyz" not found: not found
If you face this error and you're piping your Dockerfile, then I advise using -f to target a custom Dockerfile:
docker build -t dh/myimage -f Dockerfile_test .
(the . sets the context to the current directory)
Here is a test you can do yourself:
In an empty directory, create a Dockerfile_test file with this content:
FROM nginx:1.21.3-alpine
COPY test_file /my_test_file
Then create a dummy file:
touch test_file
Run build piping the test Dockerfile, see how it fails because it has no context:
docker build -t dh/myimage - < Dockerfile_test
[..]
failed to compute cache key: "/test_file" not found: not found
[..]
Now run build with -f, see how the same Dockerfile works because it has context:
docker build -t dh/myimage -f Dockerfile_test .
[..]
=> [2/2] COPY test_file /my_test_file
=> exporting to image
[..]
Check your docker-compose.yml; it might be changing the context directory.
I had a similar problem, with one clarification: I was building my Dockerfile through docker-compose.yml.
This is what my Dockerfile looked like when I got the error:
FROM alpine:3.17.0
ARG DB_NAME \
    DB_USER \
    DB_PASS
RUN apk update && apk upgrade && apk add --no-cache \
    php \
    ...
EXPOSE 9000
COPY ./conf/www.conf /etc/php/7.3/fpm/pool.d #<--- an error was here
COPY ./tools /var/www/ #<--- and here
ENTRYPOINT ["sh", "/var/www/start.sh"]
This is part of my docker-compose.yml where I described my service.
wordpress:
  container_name: wordpress
  build:
    context: . #<--- the problem was here
    dockerfile: requirements/wordpress/Dockerfile
    args:
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASS: ${DB_PASS}
  ports:
    - "9000:9000"
  depends_on:
    - mariadb
  restart: unless-stopped
  networks:
    - inception
  volumes:
    - wp:/var/www/
My docker-compose.yml was changing the context directory. Then I wrote new paths in the Dockerfile and it all worked:
COPY ./requirements/wordpress/conf/www.conf /etc/php/7.3/fpm/pool.d
COPY ./requirements/wordpress/tools /var/www/
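With those paths in place, the service can be rebuilt from the directory containing docker-compose.yml (service name taken from the compose file above):
docker-compose build wordpress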
FWIW, this same error shows up when running gcloud builds submit if the files are listed in .gitignore.
Have you tried doing a symlink with ln -s to the /etc/certs/ folder in the Docker build directory?
Alternatively, you could have one image that holds the certificates, and in your image you just COPY --from the Docker image having the certs.
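As a rough sketch of that idea, assuming a previously built image (hypothetically named certs-image) that already holds the certificates at /certs, the multi-stage copy could look like:
FROM certs-image AS certs
FROM nginx:1.21.3-alpine
COPY --from=certs /certs/fullchain.pem /etc/nginx/ssl/
COPY --from=certs /certs/privkey.pem /etc/nginx/ssl/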
I had the same error. I resolved it by adding this to my Docker build command:
docker build --no-cache -f ./example-folder/example-folder/Dockerfile .
This re-points the build context at the project's top-level directory. Even if your Dockerfile seems to run (i.e. the system seems to locate it and starts running it), I found I needed the context defined as above for any copying to happen.
Inside my Dockerfile, I had the file copying like this:
COPY ./example-folder/example-folder /home/example-folder/example-folder
I had merely quoted the source file while building a Windows container, e.g.:
COPY "file with space.txt" c:/some_dir/new_name.txt
Docker doesn't like the quotes.
Related
In my Dockerfile I've got:
ADD ../../myapp.war /opt/tomcat7/webapps/
That file exists, as ls ../../myapp.war returns the correct file, but when I execute sudo docker build -t myapp . I get:
Step 1 : ADD ../../myapp.war /opt/tomcat7/webapps/
2014/07/02 19:18:09 ../../myapp.war: no such file or directory
Does somebody know why and how to do it correctly?
cd to your parent directory instead
build the image from the parent directory, specifying the path to your Dockerfile
docker build -t <some tag> -f <dir/dir/Dockerfile> .
In this case, the Docker context is switched to the parent directory and becomes accessible to ADD and COPY.
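Applied to the question above, a rough sketch (assuming the Dockerfile sits two levels below the directory containing myapp.war; the intermediate directory names are hypothetical):
cd ../..
docker build -t myapp -f some/dir/Dockerfile .
and inside the Dockerfile the path is now relative to the new context:
ADD myapp.war /opt/tomcat7/webapps/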
With docker-compose, you can set the context folder:
# docker-compose.yml
version: '3.3'
services:
  yourservice:
    build:
      context: ./
      dockerfile: ./docker/yourservice/Dockerfile
Unfortunately (for practical and security reasons, I guess), if you want to add/copy local content, it must be located under the same root path as the Dockerfile.
From the documentation:
The <src> path must be inside the context of the build; you
cannot ADD ../something/something, because the first step of a docker
build is to send the context directory (and subdirectories) to the
docker daemon.
EDIT: There's now an option (-f) to set the path of your Dockerfile; it can be used to achieve what you want, see @Boedy's response.
Adding some code snippets to support the accepted answer.
Directory structure:
setup/
  |__docker/Dockerfile
  |__target/scripts/<myscripts.sh>
src/
  |__<my source files>
Dockerfile entry:
RUN mkdir -p /home/vagrant/dockerws/chatServerInstaller/scripts/
RUN mkdir -p /home/vagrant/dockerws/chatServerInstaller/src/
WORKDIR /home/vagrant/dockerws/chatServerInstaller
#Copy all the required files from host's file system to the container file system.
COPY setup/target/scripts/install_x.sh scripts/
COPY setup/target/scripts/install_y.sh scripts/
COPY src/ src/
Command used to build the docker image
docker build -t test:latest -f setup/docker/Dockerfile .
Since -f caused another problem, I developed another solution.
Create a base image in the parent folder.
Add the required files.
Use this image as the base image for the project, which is in a descendant folder.
The -f flag did not solve my problem, because my onbuild image looks for a file in a folder and had to be called like this:
-f foo/bar/Dockerfile foo/bar
instead of
-f foo/bar/Dockerfile .
Also note that this is only a solution for some cases, just like the -f flag.
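A minimal sketch of that approach (image names and file paths are hypothetical):
# Dockerfile in the parent folder, built with: docker build -t my-base .
FROM alpine:3.17
COPY shared/ /opt/shared/
# Dockerfile in the descendant folder, which now inherits /opt/shared from the base image
FROM my-base
COPY . /app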
Let's say your directory tree looks like this:
dir0
├───dir1
│   ├───dir11
│   │   └───dockerfile
│   └───dir12 (current)
└───dir2 (content to be copied)
and your dockerfile looks like this:
FROM baseImage
COPY / /content
Let's say you want to copy the dir2 content into a new Docker image using COPY or ADD in the dockerfile that is in dir11, while your current directory is dir12.
You will have to run this command in order to build your image properly:
docker build -t image-name:tag -f ../dir11/dockerfile ../../dir2
-t your-image-name Name and optionally a tag in the 'name:tag' format
-f ../dir11/dockerfile Name of the Dockerfile (Default is 'PATH/Dockerfile')
../../dir2 the path that becomes the context for COPY or ADD commands
Update
Let's say you run this by mistake:
docker build -t image-name:tag -f ../dir11/dockerfile ../../
This will not solve your problem, because in this case COPY / /content will copy the dir0 content (dir1 & dir2). To fix that, you can either change the command to use the right path, or change the COPY source path in the dockerfile like this:
COPY /dir2 /content
Given the following setup:
+ parent
  + service1
    - some_file.json
  + service2
    - Dockerfile
If you have a Dockerfile in service2 and want to copy some_file.json from service1, you can run this inside the service2 directory:
docker build -t my_image ../ --file Dockerfile
This will set the target context one level above. The tricky part here is that the target Dockerfile has to be set explicitly (as it is not at the root of the target context).
In service2/Dockerfile, you must issue the COPY command as if the file were one level above:
COPY service1/some_file.json /target/location/some_file.json
instead of
COPY ../service1/some_file.json /target/location/some_file.json # will not work
The solution for those who use docker-compose is to use a volume pointing to the parent folder:
# docker-compose.yml
foo:
  build: foo
  volumes:
    - ./:/src/:ro
But I'm pretty sure the same can be done by playing with volumes in the Dockerfile.
If you are using skaffold, use 'context:' to specify the context location for each image's dockerfile, e.g. context: ../../../
apiVersion: skaffold/v2beta4
kind: Config
metadata:
  name: frontend
build:
  artifacts:
    - image: nginx-angular-ui
      context: ../../../
      sync:
        # A local build will update dist and sync it to the container
        manual:
          - src: './dist/apps'
            dest: '/usr/share/nginx/html'
      docker:
        dockerfile: ./tools/pipelines/dockerfile/nginx.dev.dockerfile
    - image: webapi/image
      context: ../../../../api/
      docker:
        dockerfile: ./dockerfile
deploy:
  kubectl:
    manifests:
      - ./.k8s/*.yml
skaffold run -f ./skaffold.yaml
build the image from an upper dir
name the image
enable proper volume sharing
check the Makefile in the link above on how to start the container ...
docker build . -t proj-devops-img --no-cache --build-arg UID=$(shell id -u) --build-arg GID=$(shell id -g) -f src/docker/devops/Dockerfile
Set context: ../../ to the parent folder in docker-compose.yml.
Ex:
app:
  build:
    context: ../../
    dockerfile: ./source/docker/dockerfile/Dockerfile
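Note that with the context set two levels up, COPY paths inside that Dockerfile are then resolved relative to that parent folder, for example (paths hypothetical):
COPY source/app/ /var/www/app/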
I want to copy the contents of a parent directory (relative to the position of the Dockerfile) into my image.
This is the folder structure:
app-root/
  docker/
    php81aws/
      some-folder
      Dockerfile
      start-container
      supervisord.conf
  app_folders
  app_files
I'm calling docker build as follows:
app-root#> docker build -t laravel -f docker/php81aws/Dockerfile .
Or from docker compose with:
services:
  laravel:
    build:
      context: ./
      dockerfile: docker/php81aws/Dockerfile
Therefore, the context should be the app-root directory.
In the dockerfile, I'm using COPY like so:
COPY docker/php81aws/start-container /usr/local/bin/start-container
COPY docker/php81aws/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN chmod +x /usr/local/bin/start-container
COPY --chown=www:www . /var/www/
It always gives me this error:
failed to compute cache key: "/start-container" not found: not found
I tried to use COPY ./docker/php81aws/start-container and COPY start-container but the error is always the same. Of course copying the parent directory also fails.
You mention in a comment that your top-level app-root directory has a .dockerignore file that excludes the entire docker directory. While the Dockerfile will still be available, nothing else in that tree can be COPYed into the image, and if you COPY ./ ./ to copy the entire build context into the image, that directory won't be present.
Deleting this line from the .dockerignore file should fix your issue.
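Concretely, the offending entry in .dockerignore would look something like this (a sketch; the exact pattern may differ), and it is the line to delete:
docker/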
In general you want the .dockerignore file to include anything that's part of your host build-and-test environment, but should not by default be included in an image, possibly because the Dockerfile is going to rebuild it. In a Node context, for example, you almost always want to exclude the host's node_modules directory, or in a Java context often the Gradle build or Maven target directories. Anything you do want to include in the image needs to not be listed in .dockerignore.
I'm trying to bring up a Docker container from an existing project to run our integration tests.
The solution file references projects in a folder outside the context configured in docker-compose (build: context). This is my docker-compose.yml file:
services:
  integration:
    container_name: backend_integration
    build:
      context: .
      dockerfile: buildscripts/backend-integration.Dockerfile
    ports:
      - 8080:80
I'm running docker-compose up integration
My dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
COPY ./Directory.Build.targets .
COPY ./src/services/{projectName}/project.sln .
COPY ./src/services/{projectName}/{projectName}/projectName.csproj ./{projectName}/
COPY ./src/services/{projectName}/{projectName}Test/projectNameTest.csproj ./{projectName}Test/
COPY ./src/services/{projectName}/{projectName}/nuget.config .
RUN dotnet restore --configfile nuget.config # This is where DBCommon and DataEntities were pulled in
My solution file (sln) has these references
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "projectNameTest", "projectNameTest\projectNameTest.csproj", "{3038F569-2095-4B0D-9531-EF28424E47FB}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DBCommon", "..\..\..\..\..\pkg\Common\DBCommon\DBCommon.csproj", "{4B4D0CB1-D023-4985-A871-204C43FB2F0A}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DataEntities", "..\..\..\..\..\pkg\Common\DataEntities\DataEntities.csproj", "{6BAF95C4-667F-4AC4-99EC-EB99DC1DF3B7}"
EndProject
PS: I get the following error:
/usr/share/dotnet/sdk/2.2.207/NuGet.targets(246,5): error MSB3202: The project file "/pkg/Common/DBCommon/DBCommon.csproj" was not found. [/app/projectName.sln]
/usr/share/dotnet/sdk/2.2.207/NuGet.targets(246,5): error MSB3202: The project file "/pkg/Common/DataEntities/DataEntities.csproj" was not found. [/app/projectName.sln]
ERROR: Service 'integration' failed to build: The command '/bin/sh -c dotnet restore --configfile nuget.config' returned a non-zero code: 1
Quick answer: you cannot.
A little longer answer:
For Docker, it is always the Dockerfile's directory and below that Docker can access.
The COPY commands in the Dockerfile are relative to the current folder.
You might have to re-arrange your project structure. Move your Dockerfile to the root, probably next to the solution file, and adjust all the project references relative to that folder.
You should also set up a .dockerignore file and list all the files/folders that you don't want copied. This will help you reduce the build context that gets sent to the Docker daemon (generally reported in the first line of docker build output).
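As an illustrative sketch only (these entries are typical examples, not taken from the question), such a .dockerignore might look like:
# .dockerignore
.git
**/bin/
**/obj/
**/*.user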
I have a local project directory structure like:
config
  test
    docker-compose.yaml
    Dockerfile
    pip-requirements.txt
src
  app
    app.py
I'm trying to use Docker to spin up a container to run app.py. Simple in concept, but this has proven extraordinarily difficult. I'm keeping my Docker files in a separate sub-folder because I plan on having a large number of different environments, and I don't want to clutter my top-level folder with dozens of files like Dockerfile.1, Dockerfile.2, etc.
My docker-compose.yaml looks like:
version: '3'
services:
  worker:
    image: myname:mytag
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./src/app:/usr/local/myproject/src/app
My Dockerfile looks like:
FROM python:2.7
# Set the working directory.
WORKDIR /usr/local/myproject/src/app
# Copy the current directory contents into the container.
COPY src/app /usr/local/myproject/src/app
COPY pip-requirements.txt pip-requirements.txt
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
# Define environment variable
ENV PYTHONUNBUFFERED 1
CMD ["./app.py"]
If I run from the top-level directory of my project:
docker-compose -f config/test/docker-compose.yaml up
it succeeds in building the image, but fails when attempting to run the image with the error:
ERROR: for worker Cannot start service worker: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./app.py\": stat ./app.py: no such file or directory": unknown
If I inspect the image's filesystem with:
docker run --rm -it --entrypoint=/bin/bash myname:mytag
it correctly dumps me into /usr/local/myproject/src/app. However, this directory is empty, explaining the runtime error. Why is this empty? Shouldn't the COPY statement and volumes have populated the image with my application code?
For one, you're clobbering the data set by including the content during the build stage and then using docker-compose to overlay a directory on top of it. Let's first discuss the differences between the Dockerfile (image) and docker-compose (runtime).
Normally, you would use the COPY directive in the Dockerfile to copy a component of your local directory into the image so that it is immutable. In most application deployments, this means we bundle our entire application into the directory and prepare it to run. This means that it is not dynamic (changes you make to the code afterwards are not visible in the container), but it is a gain in terms of security.
Docker-compose is a runtime specification, meaning "Once I have an image, I want to programmatically define how it runs". By defining a volume here, you're saying "I want the local directory (from the perspective of the compose file) ./src/app to be overlaid onto /usr/local/myproject/src/app".
Thus anything you built into the image doesn't really matter. You're adding another layer on top of the image which will take precedence over what was built into the image.
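Since volume paths in a compose file are resolved relative to the compose file itself (here config/test/), one possible adjustment, sketched under that assumption, is to point the bind mount back at the real source directory:
volumes:
  - ../../src/app:/usr/local/myproject/src/app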
It may also have something to do with you specifying the WORKDIR already and then using a ./ reference in the CMD. It would be worth trying it as just CMD ["app.py"].
What happens if you:
Build the image: docker build -t "test" .
Run the image manually: docker run --rm -it test
I have a simple Dockerfile:
FROM php:7.1-apache
LABEL maintainer="rburton#agsource.com"
COPY C:/Users/rburton/code/MyAgsourceAPI /var/www
It is the last line that is giving me problems. I am copying from a Windows structure to a docker container (Linux I assume). When I build this image I get:
...
Step 3/3 : COPY C:/Users/rburton/code/MyAgsourceAPI /var/www
COPY failed: stat /var/lib/docker/tmp/dockerbuilder720374851/C:/Users/rburton/code/MyAgsourceAPI: no such file or directory
First, something is preventing the recognition that this is an absolute path, and naturally, if the path is prepended with /var/lib/docker/tmp/dockerbuilder720374851, then the file will not be found. Second, I have tried / and \ but all with the same result. Also, I suspect the drive letter is confusing Docker. So the question is: how do I copy files and folders (along with their contents) from a Windows folder to a Docker container?
First, change your Dockerfile to:
FROM php:7.1-apache
LABEL maintainer="rburton#agsource.com"
COPY MyAgsourceAPI /var/www
Then go to your code directory: cd Users/rburton/code.
Within that directory, run:
docker build -t <image_name> .
Another tip that might be helpful: I've seen the same issue while running the build from the correct context, and the issue remained until I used all lowercase for the source folder I wanted to copy from, e.g.:
COPY MyAgsourceAPI /var/www -> COPY myagsourceapi /var/www
The root of the path is relative to the Dockerfile and not your Windows filesystem.
If, for example, your filesystem is laid out like this:
+-+-MyProject
  |
  +---Dockerfile
  |
  +-+-build
      |
      +---MyAgsourceAPI
You can use:
COPY /build/MyAgsourceAPI /var/www
Note that "MyProject" (or anything above it) is excluded from the path.