How to exclude folders from the docker build context - docker

I am facing an issue with a large docker build context because of my project structure. In my root directory I have a lib folder for common code and a folder for each micro-service. Now I want the build for microservice1 to include only the lib folder and to ignore the other microservices.
I am running the docker build command in the root folder, because running the command in the microservice folder gives the error Forbidden path outside the build context.
rootFolder
-- lib
-- microservice1/Dockerfile
-- microservice2/Dockerfile
-- microservice3/Dockerfile
I have two possible solutions but haven't tried them yet:
Add a symlink to lib in each microservice folder.
Write a script that, before each docker build, copies the lib folder into the specific microservice folder and then runs docker build.
Before I commit to either of these, can anyone suggest a best practice?

You can create a .dockerignore file in your root directory and add
microservice1/
microservice2/
microservice3/
to it. Just as .gitignore excludes files from tracking, docker will ignore these folders/files when building.
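With newer Docker versions (BuildKit enabled), you can also keep a per-Dockerfile ignore file next to each service's Dockerfile, so each service excludes only its siblings; the file naming convention below is BuildKit-specific, so verify it against your Docker version:
# microservice1/Dockerfile.dockerignore
microservice2/
microservice3/
You would then build from the root so that lib stays inside the context:
DOCKER_BUILDKIT=1 docker build -f microservice1/Dockerfile -t micro/service1 .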
Update
You can also include a docker-compose.yml file in your root directory; see the docker-compose documentation for all the options you can use during the build process, such as setting environment variables or running a specific command.
version: "3"
services:
microservice1:
build:
context: .
dockerfile: ./microservice1/Dockerfile
volumes:
- "./path/to/share:/path/to/mount/on/container"
ports:
- "<host>:<container>"
links:
- rootservice # defines a dns record in /etc/hosts to point to rootservice
microservice2:
build:
context: .
dockerfile: ./microservice2/Dockerfile
volumes:
- "./path/to/share:/path/to/mount/on/container"
ports:
- "<host>:<container>"
links:
- rootservice # defines a dns record in /etc/hosts to point to rootservice
- microservice1
rootservice:
build:
context: .
dockerfile: ./Dockerfile
volumes:
- "./path/to/share:/path/to/mount/on/container"
ports:
- "<host>:<container>"
depends_on:
- microservice1
- microservice2
ports:
- "<host1>:<container1>"
- "<host2>:<container2>"
This will be the build recipe for your microservices; you can now run docker-compose build to build all the images.

If the only tool you have is Docker, there aren't very many choices. The key problem is that there is only one .dockerignore file per build. That means you always have to use your project root directory as the Docker context directory (including every service's sources), but you can tell Docker which specific Dockerfile within that context to use. (Note that all COPY directives will be relative to rootFolder in this case.)
docker build rootFolder -f microservice1/Dockerfile -t micro/service1:20190831.01
In many languages there is a way to package up the library (C .a, .h, and .so files; Java .jar files; Python wheels; ...). If your language supports that, another option is to build the library, then copy (not symlink) the library into each service's build tree. Using Python's wheel format as an example:
pip wheel ./lib
# produces a versioned wheel file, e.g. microlib-1.0-py3-none-any.whl
cp microlib-*.whl microservice1
docker build microservice1 -t micro/service1:20190831.01
# microservice1/Dockerfile then needs to
# RUN pip install ./microlib-*.whl
Another useful variant on this is a manual multi-stage build. You can have lib/Dockerfile pick some base image, and then install the library into that base image. Then each service's Dockerfile starts FROM the library image, and has it preinstalled. Using a C library as an example:
# I am lib/Dockerfile
# Build stage
FROM ubuntu:18.04 AS build
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY ./ ./
RUN ./configure --prefix=/usr/local && make
# This is a typical pattern implemented by GNU Autoconf:
# it actually writes files into /src/out/usr/local/...
RUN make install DESTDIR=/src/out
# Install stage -- service images are based on this
FROM ubuntu:18.04
COPY --from=build /src/out /
RUN ldconfig
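You would build and tag the library image first; the micro/lib name and tag here are just illustrations that the service Dockerfile below refers back to:
docker build lib -t micro/lib:20190831.01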
# I am microservice1/Dockerfile
ARG VERSION=latest
FROM micro/lib:${VERSION}
# From the base image, there are already
# /usr/local/include/microlib.h and /usr/local/lib/libmicro.so
COPY ...
RUN gcc ... -lmicro
CMD ...
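Each service build can then pin the library version through the build argument (the tag is again illustrative):
docker build microservice1 -t micro/service1:20190831.01 --build-arg VERSION=20190831.01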
There is also usually an option (again, depending on your language and its packaging system) to upload your built library to some server, possibly one you're running yourself. (A Python pip requirements.txt file can contain an arbitrary HTTP URL for a wheel, for example.) If you do this then you can just declare your library as an ordinary dependency, and this problem goes away.
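For example, a requirements.txt entry can point directly at a hosted wheel (the URL here is invented for illustration):
# requirements.txt
https://packages.example.com/wheels/microlib-1.0-py3-none-any.whl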
Which of these works better for you depends on your language and runtime, and how much automation of multiple coordinated docker build commands you're willing to do.

Related

Docker: COPY failed: file not found in build context (Dockerfile)

I'd like to instruct Docker to COPY my certificates from the local /etc/ folder on my Ubuntu machine.
I get the error:
COPY failed: file not found in build context or excluded by
.dockerignore: stat etc/.auth_keys/fullchain.pem: file does not exist
I have not excluded anything in .dockerignore.
How can I do it?
Dockerfile:
FROM nginx:1.21.3-alpine
RUN rm /etc/nginx/conf.d/default.conf
RUN mkdir /etc/nginx/ssl
COPY nginx.conf /etc/nginx/conf.d
COPY ./etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY ./etc/.auth_keys/privkey.pem /etc/nginx/ssl/
WORKDIR /usr/src/app
I have also tried it without the leading dot --> same error
COPY /etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /etc/.auth_keys/privkey.pem /etc/nginx/ssl/
Placing the .auth_keys folder next to the Dockerfile --> works, but is not desirable
COPY /.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /.auth_keys/privkey.pem /etc/nginx/ssl/
The build context is the directory you pass to docker build, commonly the directory the Dockerfile is located in. COPY can only reference files inside that context; if you want to build an image, that is one of the restrictions you have to accept.
The documentation shows how the context can be chosen, but to keep it simple just consider the Dockerfile's directory to be the context. Note: this also doesn't work with symbolic links, because they are not followed when the context is sent to the daemon.
So your observation was correct, and you need to place the files you want to copy inside the build context.
Alternatively, if you don't need to copy them but only need them available at runtime, you could opt for a mount. I can imagine this not working in your case, because you likely need the files at startup of the container.
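For completeness, a sketch of that runtime-mount alternative using the paths from the question (the image name is a placeholder; a read-only bind mount keeps the keys out of the image entirely):
docker run -d \
  -v /etc/.auth_keys:/etc/nginx/ssl:ro \
  -p 443:443 \
  my-nginx-image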
#JustLudo's answer is correct in this case. However, for those who have the correct files in the build directory and are still seeing this issue: remove any trailing comments.
Coming from a C or JavaScript background, one may be forgiven for assuming that trailing comments are ignored (e.g. COPY my_file /etc/important/ # very important!), but they are not! The error message won't point this out, as of my version of Docker (20.10.11).
For example, the above erroneous line will give an error:
COPY failed: file not found in build context or excluded by .dockerignore: stat etc/important/: file does not exist
... i.e. no mention that it is the trailing # important! that is tripping things up.
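The fix is simply to move the comment onto its own line:
# very important!
COPY my_file /etc/important/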
It's also important to note that, as mentioned in the docs:
If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.
That is, if you're running build like this:
docker build -t dh/myimage - < Dockerfile_test
Any COPY or ADD, having no context, will throw the error mentioned above or a similar one:
failed to compute cache key: "xyz" not found: not found
If you face this error and you're piping your Dockerfile, I advise using -f to target a custom Dockerfile.
docker build -t dh/myimage -f Dockerfile_test .
(the . sets the context to the current directory)
Here is a test you can do yourself:
In an empty directory, create a Dockerfile_test file, with this content
FROM nginx:1.21.3-alpine
COPY test_file /my_test_file
Then create a dummy file:
touch test_file
Run build piping the test Dockerfile, see how it fails because it has no context:
docker build -t dh/myimage - < Dockerfile_test
[..]
failed to compute cache key: "/test_file" not found: not found
[..]
Now run build with -f, see how the same Dockerfile works because it has context:
docker build -t dh/myimage -f Dockerfile_test .
[..]
=> [2/2] COPY test_file /my_test_file
=> exporting to image
[..]
Check your docker-compose.yml; it might be changing the context directory.
I had a similar problem, with one difference: I was building my Dockerfile via docker-compose.yml.
This is what my Dockerfile looked like when I got the error:
FROM alpine:3.17.0
ARG DB_NAME \
DB_USER \
DB_PASS
RUN apk update && apk upgrade && apk add --no-cache \
php \
...
EXPOSE 9000
COPY ./conf/www.conf /etc/php/7.3/fpm/pool.d #<--- an error was here
COPY ./tools /var/www/ #<--- and here
ENTRYPOINT ["sh", "/var/www/start.sh"]
This is part of my docker-compose.yml where I described my service.
wordpress:
  container_name: wordpress
  build:
    context: . #<--- the problem was here
    dockerfile: requirements/wordpress/Dockerfile
    args:
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASS: ${DB_PASS}
  ports:
    - "9000:9000"
  depends_on:
    - mariadb
  restart: unless-stopped
  networks:
    - inception
  volumes:
    - wp:/var/www/
My docker-compose.yml was changing the context directory. Once I wrote the new paths in the Dockerfile, it all worked.
COPY ./requirements/wordpress/conf/www.conf /etc/php/7.3/fpm/pool.d
COPY ./requirements/wordpress/tools /var/www/
[image: project structure]
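Alternatively (a sketch, assuming nothing in this build needs files outside requirements/wordpress), you could narrow the build context instead of rewriting the COPY paths:
build:
  context: ./requirements/wordpress
  dockerfile: Dockerfile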
FWIW this same error shows up when running gcloud builds submit if the files are included in .gitignore
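That happens because, when no .gcloudignore file exists, gcloud generates one from your .gitignore. Adding your own .gcloudignore overrides that behavior (contents here are illustrative):
# .gcloudignore
.git
node_modules/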
Have you tried creating a symlink with ln -s to the /etc/certs/ folder in the docker build directory?
Alternatively you could have one image that holds the certificates, and in your image you just COPY --from= the image that has the certs.
I had the same error. I resolved it by adding the build context to my Docker build command:
docker build --no-cache -f ./example-folder/example-folder/Dockerfile .
This keeps the build context at the project root while pointing Docker at the nested Dockerfile. Even when the build seemed to locate the Dockerfile and start running without it, I found I needed the root context defined like this for any copying to happen.
Inside my Dockerfile, I had the file copying like this:
COPY ./example-folder/example-folder /home/example-folder/example-folder
I had merely quoted the source file while building a Windows container, e.g.,
COPY "file with space.txt" c:/some_dir/new_name.txt
Docker doesn't like the quotes.
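For paths containing whitespace, the Dockerfile reference documents the JSON (exec) form of COPY, which is the supported way to quote:
COPY ["file with space.txt", "c:/some_dir/new_name.txt"]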

How to copy a subproject to the container in a multi container Docker app with Docker Compose?

I want to build a multi container docker app with docker compose. My project structure looks like this:
docker-compose.yml
...
webapp/
...
Dockerfile
api/
...
Dockerfile
Currently, I am just trying to build and run the webapp via docker compose up with the correct build context. When building the webapp container directly via docker build, everything runs smoothly.
However, with my current specifications in the docker-compose.yml the line COPY . /webapp/ in webapp/Dockerfile (see below) copies the whole parent project to the container, i.e. the directory which contains the docker-compose.yml, and not just the webapp/ sub directory.
For some reason the line COPY requirements.txt /webapp/ works as expected.
What is the correct way of specifying the build context in docker compose? Why is the . in the Dockerfile interpreted as relative to the docker-compose.yml, while requirements.txt is relative to the Dockerfile as expected? What am I missing?
Here are the contents of the docker-compose.yml:
version: "3.8"
services:
frontend:
container_name: "pc-frontend"
volumes:
- .:/webapp
env_file:
- ./webapp/.env
build:
context: ./webapp
ports:
- 5000:5000
and webapp/Dockerfile:
FROM python:3.9-slim
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# set working directory
WORKDIR /webapp
# copy dependencies
COPY requirements.txt /webapp/
# install dependencies
RUN pip install -r requirements.txt
# copy project (does not work as intended)
COPY . /webapp/
# add entrypoint to app
# ENTRYPOINT ["start-gunicorn.sh"]
CMD [ "ls", "-la" ] # for debugging
# expose port
EXPOSE 5000
The COPY directive is (probably) working the way you expect. But, you have volumes: that are overwriting the image content with something else. Delete the volumes: block.
The image build sequence is working exactly the way you expect. build: { context: ./webapp } uses the webapp subdirectory as the build context and sends it to the Docker daemon. When the Dockerfile says, for example, COPY requirements.txt ., the file comes out of this directory. If you run, for example, docker-compose run frontend pip freeze, you should see the installed Python packages.
After the image is built, Compose starts a container, and at that point volumes: take effect. When you say volumes: ['.:/webapp'], here the . before the colon refers to the directory containing the docker-compose.yml file (and not the webapp subdirectory), and then it hides everything in the /webapp directory in the container. So you're replacing the image's /webapp (which had been built from the webapp subdirectory) with the current directory on the host (one directory higher).
You should usually be able to successfully combine an ordinary host-based development environment and a Docker deployment setup. Use a non-Docker Python virtual environment to build the application and run its unit tests, then use docker-compose up --build to run integration tests and the complete application. With a setup like this, you don't need to deal with the inconveniences of the Python runtime being "somewhere else" as you're developing, and you can safely remove the volumes: block.
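For reference, a sketch of the compose file with the volumes: block removed and everything else as in the question:
version: "3.8"
services:
  frontend:
    container_name: "pc-frontend"
    env_file:
      - ./webapp/.env
    build:
      context: ./webapp
    ports:
      - 5000:5000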

Docker container not using latest composer.json file

I'm going crazy here.
I've been working on a Dockerfile and docker-compose.yml file for my project. I recently updated my project's dependencies. When I build the project outside of a container using composer install, it builds with the correct dependencies. However, when I build the project inside a docker container, it downloads and installs the latest dependencies, but then somehow runs the application using obsolete dependencies!
First of all, this is what my Dockerfile looks like:
FROM composer
# Set the working directory within the docker container
WORKDIR /app
# Copy in the app, then install dependencies.
COPY . /app
RUN composer install
I have excluded the composer.lock file and the vendor directory in my .dockerignore:
vendor
composer.lock
Here's my docker-compose.yml:
version: "3"
services:
app:
build: .
volumes:
- app:/app
webserver:
image: richarvey/nginx-php-fpm
volumes:
- app:/var/www/html
volumes:
app:
Note that the build process occurs within the app volume. I don't think this should be part of the problem, as I run docker system prune each time, to purge all existing volumes.
This is what I do to run the container. While troubleshooting, I have been running these commands to eliminate any cached files before starting the container:
$ docker system prune
$ docker-compose build --no-cache
$ docker-compose up --force-recreate
As I watch the dependencies install and download, I can see that it is downloading and installing the right versions! So it must have the correct composer.json file at some point in the process.
Yet somehow, once the build is complete and the application starts, I get the same old warnings about obsolete dependencies, and sure enough, the composer.json inside the container is obsolete!
So my questions are:
How TF is the composer.json file in the container obsolete?
WHERE is it getting the obsolete file from, since it no longer exists in any image or cache??
How TF is it managing to install the latest dependencies with this obsolete composer.json file, but then not using them, and in fact reverting the composer.json file and the dependencies??
I think the problem is that you copy your local files into the app container and run composer install on that copy. Since this does not affect your host system, your webserver, which actually serves your project, will still use the outdated local version instead of the copy from your other image.
You could try using multi-stage builds or something like this:
COPY --from=app:latest /app /var/www/html
This will copy the artifact from your "build-container", i.e. your project with the installed dependency in app, into the actual container that is running the code, i.e. webserver. Unfortunately, I don't think this will work (well) with your setup, where you mount the volume into that location.
Well, I finally fixed this issue, although parts of my original problem still confuse me.
Here's what I learned:
The docker-compose up process goes in this order:
If an image already exists, use it, even if the Dockerfile (or files used by it) has changed. (This can be avoided with docker-compose up --build).
If there is no existing image, build the image from the Dockerfile.
Mount the volumes specified in the docker-compose file.
A huge part of my problem was that I thought that the volumes were mounted before the build process, and that my application would be installed into this volume as a result of these commands:
COPY . /app
RUN composer install
However, these files were later overwritten when the volume was mounted at the same location within the container (/app).
Now, since I was not mounting a host directory, just an ephemeral, named volume, the /app directory should have been empty. I still don't understand why it wasn't, considering I was clearing my existing Docker volumes with docker system prune before each build. Whatever. (In hindsight, two details explain it: docker system prune does not remove volumes unless you pass --volumes, and Docker pre-populates an empty named volume with the image's existing content at the mount path, so stale files can reappear.)
In the end, I used #dbrumann's solution. This was simpler, did not require the use of any Docker volumes, and avoids having a live composer container after the build process has completed (this would be bad for production). My Dockerfile now looks like this:
Dockerfile:
# Install dependencies using the composer image
FROM composer AS composer
# Set the working directory within the docker container
WORKDIR /app
# Copy in the app, then install dependencies.
COPY . .
RUN composer install
# Start the nginx server
FROM richarvey/nginx-php-fpm
# Copy over files from the composer image, which is then discarded automatically
WORKDIR /var/www/html
COPY --from=composer /app .
And the new docker-compose.yml:
version: "3.7"
services:
webserver:
build: .
tty: true
ports:
- "80:80"
- "443:443"

Include .env file in `go build` command

I have the following Dockerfile which builds an image for my Go project.
FROM golang:1.11.2-alpine3.8 as go-compile
RUN apk update && apk add git
RUN mkdir /app
COPY src/ /app
WORKDIR /app
RUN go get github.com/joho/godotenv
RUN go build -o main .
FROM alpine:latest
RUN mkdir /app
COPY --from=go-compile /app/main /app/main
CMD ["/app/main"]
The image builds, but my ".env" file is not included in the Docker image.
I've tried to copy the ".env" from the src folder into the image using COPY src/.env /app/.env, but still the Go code can't find/read the file.
How can I include the .env file, or in fact any other non-Go file?
You cannot include non-go files in the go build process. The Go tool doesn't support "embedding" arbitrary files into the final executable.
You should use go build to build your executable; then any non-go files, e.g. templates, images, config files, need to be made available to that executable. That is, the executable needs to know where the non-go files are on the filesystem of the host machine on which the go program is running, and then open and read them as needed. So forget about embedding .env into main; instead, copy .env together with main to the same location from which you want to run main.
The issue with your Dockerfile, then, is that the final stage only copies the executable from go-compile (COPY --from=go-compile /app/main /app/main); it doesn't copy any other files present in the go-compile image, so your main app cannot access .env because the two never end up in the same image.
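Concretely, that means copying .env alongside the binary in the final stage and setting the working directory so godotenv's default lookup of ./.env succeeds (a sketch based on the Dockerfile from the question):
FROM alpine:latest
RUN mkdir /app
COPY --from=go-compile /app/main /app/main
COPY --from=go-compile /app/.env /app/.env
# godotenv loads ".env" relative to the working directory by default
WORKDIR /app
CMD ["./main"]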
As pointed out in the comments by #mh-cbon, there do exist 3rd-party solutions for embedding non-go files into the go binary, one of which is gobuffalo/packr.
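Note that since Go 1.16 (newer than the golang:1.11 image in the question), the standard library's embed package does support compiling files into the binary, so the first paragraph of this answer is now dated. A minimal sketch:
package main

import (
	_ "embed"
	"fmt"
)

//go:embed .env
var envFile string // the contents of .env, baked in at build time

func main() {
	fmt.Print(envFile)
}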
You can inject a dotenv file or single variables into your service using docker compose:
version: "3.9"
services:
backend:
...
environment:
- NODE_ENV=production
env_file:
- ./backend/.env
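Note that environment: and env_file: only set variables in the running container; they are not visible while the image is being built. If a value is needed at build time, it has to be passed as a build argument instead (names here are illustrative):
build:
  context: .
  args:
    - SOME_KEY=${SOME_KEY}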

Artifact caching for multistage docker builds

I have a Dockerfile like this:
# build-home
FROM node:10 AS build-home
WORKDIR /usr/src/app
COPY /home/package.json /home/yarn.lock /usr/src/app/
RUN yarn install
COPY ./home ./
RUN yarn build
# build-dashboard
FROM node:10 AS build-dashboard
WORKDIR /usr/src/app
COPY /dashboard/package.json /dashboard/yarn.lock /usr/src/app/
RUN yarn install
COPY ./dashboard ./
RUN yarn build
# run
FROM nginx
EXPOSE 80
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build-home /usr/src/app/dist /usr/share/nginx/html/home
COPY --from=build-dashboard /usr/src/app/dist /usr/share/nginx/html/dashboard
Here two React applications are built, and the artifacts of the builds are then put into nginx. To improve build performance, I need to cache the dist folder in the build-home and build-dashboard build stages.
For this I created a volume in docker-compose.yml:
...
web:
  container_name: web
  build:
    context: ./web
  volumes:
    - ./web-build-cache:/usr/src/app
  ports:
    - 80:80
  depends_on:
    - api
I've stopped at this stage because I don't understand how to attach the volume created by docker-compose to the build-home stage first, and then attach the same volume to the build-dashboard stage.
Maybe I should create two volumes and attach one to each of the build stages, but how do I do that?
UPDATE:
Initial build.
Home application:
Install modules: 100.91s
Build app: 39.51s
Dashboard application:
Install modules: 100.91s
Build app: 50.38s
Overall time:
real 8m14.322s
user 0m0.560s
sys 0m0.373s
Second build (without code or dependencies change):
Home application:
Install modules: Using cache
Build app: Using cache
Dashboard application:
Install modules: Using cache
Build app: Using cache
Overall time:
real 0m2.933s
user 0m0.309s
sys 0m0.427s
Third build (with small change in code in first app):
Home application:
Install modules: Using cache
Build app: 50.04s
Dashboard application:
Install modules: Using cache
Build app: Using cache
Overall time:
real 0m58.216s
user 0m0.340s
sys 0m0.445s
Initial build of home application without Docker: 89.69s
real 1m30.111s
user 2m6.148s
sys 2m17.094s
Second build of home application without Docker, the dist folder exists on disk (without code or dependencies change): 18.16s
real 0m18.594s
user 0m20.940s
sys 0m2.155s
Third build of home application without Docker, the dist folder exists on disk (with small change in code): 20.44s
real 0m20.886s
user 0m22.472s
sys 0m2.607s
Inside the Docker container, the third build of the application takes more than twice as long as outside. This shows that when the result of a previous build is on disk, subsequent builds complete faster. In the Docker container, every build after the first takes as long as the first whenever the code changes, because there is no persisted dist folder.
If you're using multi-stage builds then there's a problem with the Docker cache: the final image doesn't contain the layers from the intermediate build stages. By using --target and --cache-from together you can save those layers and reuse them on rebuild.
You need something like
docker build \
  --target build-home \
  --cache-from build-home:latest \
  -t build-home:latest \
  .
docker build \
  --target build-dashboard \
  --cache-from build-dashboard:latest \
  -t build-dashboard:latest \
  .
docker build \
  --cache-from build-dashboard:latest \
  --cache-from build-home:latest \
  -t my-image:latest \
  .
You can find more details at
https://andrewlock.net/caching-docker-layers-on-serverless-build-hosts-with-multi-stage-builds---target,-and---cache-from/
You can't use volumes during an image build, and in any case Docker already does the caching you're asking for. If you leave your Dockerfile as-is and don't try to add your code as volumes in the docker-compose.yml, you should get caching of the built JavaScript files across rebuilds, as you expect.
When you run docker build, Docker looks at each step in turn. If the input to the step hasn't changed, the step itself hasn't changed, and any files that are being added haven't changed, then Docker will just reuse the result of running that step previously. In your Dockerfile, if you only change the nginx config, it will skip over all of the Javascript build steps and reuse their result from the previous time around.
(The other relevant technique, which you already have, is to build applications in two steps: first copy in files like package.json and yarn.lock that name dependencies, and install dependencies; then copy in and build your application. Since the "install dependencies" step is frequently time-consuming and the dependencies change relatively infrequently, you want to encourage Docker to reuse the last build's node_modules directory.)
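If the rebuild-after-a-code-change case is still too slow, newer Docker versions with BuildKit also offer cache mounts, which persist a directory (here yarn's package cache) across builds without any volumes. This is a sketch; verify the cache path with yarn cache dir in your image:
# syntax=docker/dockerfile:1
FROM node:10 AS build-home
WORKDIR /usr/src/app
COPY home/package.json home/yarn.lock ./
# the cache mount persists on the build host across builds
RUN --mount=type=cache,target=/usr/local/share/.cache/yarn \
    yarn install
COPY home/ ./
RUN yarn build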
