I have a multi-stage Dockerfile like this:
FROM A AS a
WORKDIR /app
COPY . .
FROM B AS b
COPY --from=a /app/path/to/a/file /destination/file/path
Sometimes the source file of the last COPY does not exist, and this causes the docker build to fail. Is it possible to do the copy without worrying about the existence of /app/path/to/a/file?
I don't believe there is a way to allow a COPY instruction to fail, but what you can do is copy not a specific file but the contents of a folder that does exist (even if it is empty).
For example, imagine you have a folder structure like this:
.
├── Dockerfile
└── test_folder
    └── test_file

1 directory, 2 files
and you build from this Dockerfile:
FROM alpine as alp1
WORKDIR /app
COPY . .
FROM alpine
RUN mkdir /dest_folder
COPY --from=alp1 /app/test_folder/test_file /dest_folder
It works because the file does exist; if we were to delete the file from that folder, the COPY would fail.
So instead of copying /app/test_folder/test_file, we can copy /app/test_folder/, which copies EVERYTHING inside test_folder to /dest_folder in the second stage, EVEN if the folder is empty:
file removed:
.
├── Dockerfile
└── test_folder
1 directory, 1 file
building from:
FROM alpine as alp1
WORKDIR /app
COPY . .
FROM alpine
RUN mkdir /dest_folder
COPY --from=alp1 /app/test_folder/ /dest_folder
> docker build -t test .
Sending build context to Docker daemon 2.56kB
Step 1/6 : FROM alpine as alp1
---> 0a97eee8041e
Step 2/6 : WORKDIR /app
---> Using cache
---> 0a6fc3a90e15
Step 3/6 : COPY . .
---> Using cache
---> 076efaa3a8b9
Step 4/6 : FROM alpine
---> 0a97eee8041e
Step 5/6 : RUN mkdir /dest_folder
---> Using cache
---> 8d647b9a1573
Step 6/6 : COPY --from=alp1 /app/test_folder/ /dest_folder
---> Using cache
---> 361b0919c585
Successfully built 361b0919c585
Successfully tagged test:latest
dest_folder exists:
docker run -it --rm test ls
bin etc media proc sbin tmp
dest_folder home mnt root srv usr
dev lib opt run sys var
but nothing is inside
docker run -it --rm test ls -lah /dest_folder
total 8K
drwxr-xr-x 1 root root 4.0K Nov 24 14:06 .
drwxr-xr-x 1 root root 4.0K Nov 24 14:13 ..
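As a related workaround (my own addition, not part of the demonstration above): the classic builder tolerates a COPY wildcard that matches nothing, as long as at least one of the listed sources does resolve. So pairing the maybe-missing file with a file that is guaranteed to exist may also work; the always-present file below is an assumed placeholder:

```dockerfile
# Sketch, untested: the trailing * makes the maybe-missing file optional.
# /app/always-there stands in for any file guaranteed to exist in stage a;
# if NO source matches, COPY fails with "no source files were specified".
COPY --from=a /app/always-there /app/path/to/a/file* /destination/dir/
```

Note that with multiple sources the destination must be a directory (trailing slash), not a single destination file path.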
Related
I have a Dockerfile that copies a binary from a previous stage. However, the entrypoint/command does not work, even though I can see that the file exists.
FROM rust as rust-builder
WORKDIR /app
COPY Cargo.lock Cargo.lock
COPY Cargo.toml Cargo.toml
COPY src/ src
RUN cargo build --release
FROM alpine:latest
COPY --from=rust-builder /app/target/release/kube-logs-generator /app/kube-logs-generator
RUN ls -la /app
CMD ["/app/kube-logs-generator"]
This is the output:
Sending build context to Docker daemon 2.159GB
Step 1/10 : FROM rust as rust-builder
---> dd3f19acb681
Step 2/10 : WORKDIR /app
---> Using cache
---> 5e0281f74323
Step 3/10 : COPY Cargo.lock Cargo.lock
---> Using cache
---> 060b7d6f8349
Step 4/10 : COPY Cargo.toml Cargo.toml
---> Using cache
---> 56b90bed67d5
Step 5/10 : COPY src/ src
---> Using cache
---> cdfa52607837
Step 6/10 : RUN cargo build --release
---> Using cache
---> 2f13c20bbebe
Step 7/10 : FROM alpine:latest
---> 9c6f07244728
Step 8/10 : COPY --from=rust-builder /app/target/release/kube-logs-generator /app/kube-logs-generator
---> Using cache
---> b2158ebfac6f
Step 9/10 : RUN ls -la /app
---> Running in cda38e0f4ff0
total 8056
drwxr-xr-x 1 root root 38 Oct 28 14:07 .
drwxr-xr-x 1 root root 140 Oct 28 14:14 ..
-rwxr-xr-x 1 root root 8245984 Oct 28 14:03 kube-logs-generator
Removing intermediate container cda38e0f4ff0
---> 6bf0803d98d6
Step 10/10 : CMD ["/app/kube-logs-generator"]
---> Running in 99ba34dd6afa
Removing intermediate container 99ba34dd6afa
---> c616fa4a1d55
Successfully built c616fa4a1d55
Successfully tagged bkauffman7/kube-logs-generator:latest
Then running
docker run bkauffman7/kube-logs-generator
exec /app/kube-logs-generator: no such file or directory
I'm not sure why it thinks the file isn't there.
Your final application stage uses an Alpine-based image, but your builder does not. Alpine uses the musl C library, while the standard rust image is Debian-based and links against GNU libc; a glibc-linked binary cannot find its dynamic loader on Alpine, which surfaces as a "no such file or directory" error even when the file plainly exists.
The easiest way to work around this is to use a Debian- or Ubuntu-based image for the final stage. This is somewhat larger, but it includes the standard GNU libc.
FROM rust:bullseye as rust-builder
...
FROM debian:bullseye
COPY --from=rust-builder /app/target/release/kube-logs-generator /app/kube-logs-generator
The Docker Hub rust image includes an Alpine variant, so it may also work to use an Alpine-based image as the builder:
FROM rust:alpine3.16 AS rust-builder
...
FROM alpine:3.16
In both cases, I've made sure that both the Linux distribution and its major version match between the builder and runtime stages.
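A third option (my own sketch, not part of the answer above): keep the Debian-based builder but target musl, producing a statically linked binary that runs on Alpine regardless of which libc it ships:

```dockerfile
# Sketch, untested: cross-compile against musl for a static binary.
FROM rust:bullseye AS rust-builder
WORKDIR /app
RUN rustup target add x86_64-unknown-linux-musl
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

FROM alpine:3.16
# With --target, cargo places output under target/<triple>/release/.
COPY --from=rust-builder /app/target/x86_64-unknown-linux-musl/release/kube-logs-generator /app/kube-logs-generator
CMD ["/app/kube-logs-generator"]
```

Crates that link against C libraries may need extra setup (e.g. musl-tools), so treat this as a starting point.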
I am trying to add a COPY to my Dockerfile; I've simplified it to a very short one that shows the issue.
Simple Dockerfile - ddtest
FROM maven AS testdocker
COPY . /server
WORKDIR /server/admin
RUN mkdir target; echo "hello world" > ./target/test.txt
RUN pwd
RUN ls -la ./target/
COPY ./target/test.txt /test.txt
CMD ["/usr/bin/java", "-jar", "/server.jar"]
Building with command docker build . -f ddtest
Execution log:
docker build . -f ddtest
Sending build context to Docker daemon 245.8kB
Step 1/8 : FROM maven AS testdocker
---> e85864b4079a
Step 2/8 : COPY . /server
---> e6c9c6d55be1
Step 3/8 : WORKDIR /server/admin
---> Running in d30bab5d6b6b
Removing intermediate container d30bab5d6b6b
---> 7409cbc70fac
Step 4/8 : RUN mkdir target; echo "hello world" > ./target/test.txt
---> Running in ad507dfc604b
Removing intermediate container ad507dfc604b
---> 0d69df30d041
Step 5/8 : RUN pwd
---> Running in 72becb9ae3ba
/server/admin
Removing intermediate container 72becb9ae3ba
---> 7bde9ccae4c6
Step 6/8 : RUN ls -la ./target/
---> Running in ceb5c222f3c0
total 12
drwxr-xr-x 2 root root 4096 Aug 9 05:50 .
drwxr-xr-x 1 root root 4096 Aug 9 05:50 ..
-rw-r--r-- 1 root root 12 Aug 9 05:50 test.txt
Removing intermediate container ceb5c222f3c0
---> 3b4dbcb794ad
Step 7/8 : COPY ./target/test.txt /test.txt
COPY failed: stat /var/lib/docker/tmp/docker-builder015566471/target/test.txt: no such file or directory
The copy of test.txt to its destination in the image fails. Why?
The ls command you invoked runs inside an intermediate container, not on your host. So, according to your WORKDIR, you are listing the files inside /server/admin/target of your container.
As the ls output shows, this test.txt file is already inside your image.
A COPY src dst instruction copies a file or directory from your host (the build context) into your image, and it seems that you don't have any file named test.txt inside the ./target directory on your host.
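Since test.txt is created by a RUN step and therefore already lives inside the image, a shell copy reaches it where COPY cannot. A sketch based on the Dockerfile above:

```dockerfile
FROM maven AS testdocker
COPY . /server
WORKDIR /server/admin
RUN mkdir target; echo "hello world" > ./target/test.txt
# RUN executes inside the image, so it sees files created by earlier layers;
# COPY, by contrast, only reads from the build context on the host.
RUN cp ./target/test.txt /test.txt
CMD ["/usr/bin/java", "-jar", "/server.jar"]
```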
When I try to build the Dockerfile, it copies sample.pdf from the documents folder. But the PDF file doesn't exist in the container when I run it.
Step 6/9 : COPY . .
---> b0c137c4b5bb
Step 7/9 : COPY documents/ /usr/src/app/documents/
---> 77ac91c3ebb9
Step 8/9 : RUN ls -la /usr/src/app/documents/*
---> Running in 03c9f14669c3
-rw-rw-rw- 1 root root 2830 May 3 14:30 /usr/src/app/documents/sample.pdf
Removing intermediate container 03c9f14669c3
After running the image via docker-compose, the file doesn't exist in the container:
sudo docker exec -it test_consumer_1 ls /usr/src/app/documents
//[None] - it should show sample.pdf
Dockerfile:
FROM python:3.6-alpine
COPY requirements.txt /usr/src/app/requirements.txt
WORKDIR /usr/src/app
RUN pip install -r requirements.txt
# Without this setting, Python never prints anything out.
ENV PYTHONUNBUFFERED=1
COPY . .
COPY documents/ /usr/src/app/documents/
RUN ls -la /usr/src/app/documents/*
CMD ["python", "receive.py"]
So I am pretty new to Docker and its setup, but I have been messing around with this Dockerfile for about half an hour, trying to get a working image together for an Angular 6 application that I have.
Aside from the messy Dockerfile, the last bit doesn't seem to be working.
I am working inside the /workdir directory for all this stuff: building my Angular app, and so on. Afterwards I move out to the root directory, want to copy the /workdir/dist directory to the root /dist folder, and then delete the working directory. I get a file-or-directory-not-found error at the COPY line. However, if I run ls I see that workdir exists, and if I run ls ./workdir/dist I see that there are files in it.
My Linux scripting and Docker understanding are very limited, but I cannot see why this seems to fail.
FROM node:8
ENV ExposedPort 80
WORKDIR /workdir
COPY . /workdir
RUN npm install
RUN npm run production-angular
COPY package.json dist
COPY index.js dist
WORKDIR /
RUN ls
RUN ls workdir
RUN ls workdir/dist
COPY workdir/dist dist
CMD node index.js
EXPOSE ${ExposedPort}
My last few lines from the docker build command:
Step 9/15 : WORKDIR /
Removing intermediate container 10f69d248a51
---> d6184c6f0ceb
Step 10/15 : RUN ls
---> Running in c30b23783655
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
workdir
Removing intermediate container c30b23783655
---> fb74727468f6
Step 11/15 : RUN ls workdir
---> Running in e6bae18e3560
README.md
angular.json
dist
dockerfile
e2e
index.js
node_modules
package-lock.json
package.json
src
tsconfig.json
tslint.json
Removing intermediate container e6bae18e3560
---> c3e1f9f77d00
Step 12/15 : RUN ls workdir/dist
---> Running in 07923b11226e
3rdpartylicenses.txt
favicon.ico
index.html
index.js
main.1c74cb5a2b3da54e1148.js
package.json
polyfills.479875dfe4ba221bc297.js
runtime.a66f828dca56eeb90e02.js
styles.34c57ab7888ec1573f9c.css
Removing intermediate container 07923b11226e
---> 66ef563ba292
Step 13/15 : COPY workdir/dist dist
COPY failed: stat /var/lib/docker/tmp/docker-builder326636018/workdir/dist: no such file or directory
Change COPY ./workdir to COPY workdir
https://github.com/docker/for-linux/issues/90#issuecomment-326045261
Paths in a Dockerfile are always relative to the context directory. The context directory is the positional argument passed to docker build (often .).
If there is no workdir/dist in the context directory, then this is the expected behaviour.
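In this question, workdir/dist was produced by RUN npm run production-angular inside the image, so it exists in the image filesystem but not in the build context on the host, which is why RUN ls sees it while COPY cannot. A shell copy (my own sketch, not from the answer above) sidesteps this:

```dockerfile
WORKDIR /
# dist was created by an earlier RUN step, so it lives in the image;
# cp runs inside the image, whereas COPY only reads the host build context.
RUN cp -r workdir/dist dist
```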
I'm trying to build a stack with one docker-compose file that runs several containers, as a development environment with all my projects inside.
The problem is that the volume with the application source isn't appearing in the built image.
MacOS Sierra
Docker version 17.03.0-ce, build 60ccb22
Boot2Docker-cli version: v1.8.0
my directory tree
/dockers <======= one directory with all docker files for each project
    docker-compose.yml <======= the main compose file
    /project1 <======= dockerfile for each project
        Dockerfile
    /project2
        Dockerfile
    /project3
        Dockerfile
/project1 <======= project1 source folder
    test.txt
/project2
/project3
my docker-compose.yml
project1:
  build: ./project1
  volumes:
    - ../project1/:/src
my dockerfile for project1
FROM python:2.7
RUN mkdir -p /src
WORKDIR /src
RUN echo "---------------------"
RUN ls -la
RUN echo "---------------------"
So I try to build the docker-compose file
$ sudo docker-compose build --no-cache
And then it shows an empty folder where I expect the test.txt file:
Building express
ERROR: Cannot locate specified Dockerfile: Dockerfile
➜ docker git:(master) ✗ sudo docker-compose build --no-cache
Building project1
Step 1/7 : FROM python:2.7
---> ca388cdb5ac1
Step 2/7 : RUN mkdir -p /src
---> Running in 393a462f7a44
---> 4fbeb32d88b3
Removing intermediate container 393a462f7a44
Step 3/7 : WORKDIR /src
---> 03ce193577ab
Removing intermediate container b1cd746b699a
Step 4/7 : RUN echo "--------------------------"
---> Running in 82df8a512c90
----------------------------
---> 6dea58ba5051
Removing intermediate container 82df8a512c90
Step 5/7 : RUN ls -la
---> Running in 905417d0cd19
total 8
drwxr-xr-x 2 root root 4096 Mar 23 17:12 . <====== EMPTY :(
drwxr-xr-x 1 root root 4096 Mar 23 17:12 .. <====== EMPTY :(
---> 53764caffb1a
Removing intermediate container 905417d0cd19
Step 6/7 : RUN echo "-----------------------------"
---> Running in 110e765d102a
----------------------------
---> b752230fd6dc
Removing intermediate container 110e765d102a
Step 7/7 : EXPOSE 3000
---> Running in 1cfe2e80d282
---> 5e3e740d5a9a
Removing intermediate container 1cfe2e80d282
Successfully built 5e3e740d5a9a
Volumes are runtime configuration in Docker. Because they are configurable at run time, referencing a volume during the build phase would create a dependency that the build cannot verify and that could silently break.
I'm sure there is a more technical reason, but it really shouldn't be done. Move all of that into your runtime setup and you should be OK.
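In practical terms for the compose file above: RUN ls -la in the Dockerfile executes at build time, before any volume exists, so /src is empty; the bind mount only appears once a container starts. A sketch of how to check it at run time (assuming the project1 service name from above):

```shell
# Build-time: volumes are NOT mounted, so RUN ls -la /src shows nothing.
# Run-time: the bind mount from docker-compose.yml is attached.
docker-compose run --rm project1 ls -la /src
# test.txt should be listed here, since ../project1 is mounted at /src.
```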