Conditional COPY/ADD in Dockerfile? - docker

Inside my Dockerfiles I would like to COPY a file into my image only if it exists. The requirements.txt file for pip seems like a good candidate, but how can this be achieved?
COPY (requirements.txt if test -e requirements.txt; fi) /destination
...
RUN if test -e requirements.txt; then pip install -r requirements.txt; fi
or
if test -e requirements.txt; then
COPY requirements.txt /destination;
fi
RUN if test -e requirements.txt; then pip install -r requirements.txt; fi

Here is a simple workaround:
COPY foo file-which-may-exist* /target
Make sure foo exists, since COPY needs at least one valid source.
If file-which-may-exist is present, it will also be copied.
NOTE:
You should take care to ensure that your wildcard doesn't pick up other files which you don't intend to copy. To be more careful, you could use file-which-may-exist? instead (? matches just a single character).
Or even better, use a character class like this to ensure that only one file can be matched:
COPY foo file-which-may-exis[t] /target
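Put together, a minimal sketch for the requirements.txt case from the question (setup.py here is just an assumed always-present file, and the character class keeps the wildcard from matching anything else):
FROM python:3.9
WORKDIR /app
COPY setup.py requirements.tx[t] ./
RUN if [ -e requirements.txt ]; then pip install -r requirements.txt; fi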

As stated in this comment, Santhosh Hirekerur's answer still copies the file into the image. To achieve a true conditional copy, you can use this method.
ARG BUILD_ENV=copy
FROM alpine as build_copy
ONBUILD COPY file /file
FROM alpine as build_no_copy
ONBUILD RUN echo "I don't copy"
FROM build_${BUILD_ENV}
# other stuff
The ONBUILD instructions ensure that the file is only copied if the "branch" is selected by BUILD_ENV. Set this variable using a little script before calling docker build.
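For example, a small wrapper along these lines (a sketch; the file name and image tag are placeholders):
#!/bin/sh
# pick the branch depending on whether the file exists in the build context
if [ -e file ]; then
  BUILD_ENV=copy
else
  BUILD_ENV=no_copy
fi
docker build --build-arg BUILD_ENV="${BUILD_ENV}" -t my-image .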

This isn't currently supported (I suspect because it would lead to a non-reproducible image, since the same Dockerfile would copy the file or not, depending on its existence).
This is still requested, using wildcards, in issue 13045: "COPY foo/* bar/ does not work if no file in foo" (May 2015).
It won't be implemented in Docker for now (July 2015), but another build tool like bocker could support this.
2021:
COPY source/. /source/ works for me (i.e. it copies the directory whether it is empty or not, as in "Copy directory into docker build no matter if empty or not - fails on "COPY failed: no source files were specified"").
2022:
Here is my suggestion:
# syntax=docker/dockerfile:1.2
RUN --mount=type=bind,source=jars,target=/build/jars \
find /build/jars -type f -name '*.jar' -maxdepth 1 -print0 \
| xargs -0 --no-run-if-empty --replace=source cp --force source "${INSTALL_PATH}/modules/"
That works around:
COPY jars/*.jar "${INSTALL_PATH}/modules/"
but, unlike COPY, it simply copies nothing when no *.jar is found instead of failing with an error.

I think I came up with a valid workaround with this Dockerfile
FROM alpine
COPY always_exist_on_host.txt .
COPY *sometimes_exist_on_host.txt .
The always_exist_on_host.txt file will always be copied into the image, and the build won't fail when sometimes_exist_on_host.txt doesn't exist. When it does exist, it will be copied as well.
For example:
.
├── Dockerfile
└── always_exist_on_host.txt
build succeeds
docker build . -t copy-when-exists --no-cache
[+] Building 1.0s (7/7) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 1.0s
=> [internal] load build context 0.0s
=> => transferring context: 43B 0.0s
=> CACHED [1/2] FROM docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a 0.0s
=> [2/2] COPY always_exist_on_host.txt *sometimes_exist_on_host.txt . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:e7d02c6d977f43500dbc1c99d31e0a0100bb2a6e5301d8cd46a19390368f4899 0.0s
.
├── Dockerfile
├── always_exist_on_host.txt
└── sometimes_exist_on_host.txt
build still succeeds
docker build . -t copy-when-exists --no-cache
[+] Building 1.0s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 0.9s
=> [internal] load build context 0.0s
=> => transferring context: 91B 0.0s
=> CACHED [1/2] FROM docker.io/library/alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a 0.0s
=> [2/2] COPY always_exist_on_host.txt *sometimes_exist_on_host.txt . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:4c88e2ffa77ebf6869af3c7ca2a0cfb9461979461fc3ae133709080b5abee8ff 0.0s
=> => naming to docker.io/library/copy-when-exists 0.0s

Workaround solution
I had a requirement to copy a FOLDER to the server based on ENV variables. I took an empty server image, created the required deployment folder structure in a local folder, then added the lines below to the Dockerfile to copy the folders into the container. The last line adds an entrypoint that executes the init script custom-init.sh before Docker starts the server.
#below lines added to integrate testing framework
RUN mkdir /mnt/conf_folder
ADD install /mnt/conf_folder/install
ADD install_test /mnt/conf_folder/install_test
ADD custom-init.sh /usr/local/bin/custom-init.sh
ENTRYPOINT ["/usr/local/bin/custom-init.sh"]
Then create the custom-init.sh file locally with a script something like the one below:
#!/bin/bash
if [ "${BUILD_EVN}" = "TEST" ]; then
cp -avr /mnt/conf_folder/install_test/* /mnt/wso2das-3.1.0/
else
cp -avr /mnt/conf_folder/install/* /mnt/wso2das-3.1.0/
fi;
In the docker-compose file, add the lines below:
environment:
- BUILD_ENV=TEST
These changes copy both folders into the container during docker build. When we execute docker-compose up, the script copies (deploys) the actually required folder to the server before the server starts.

Copy all files to a throwaway dir, hand pick the one you want, discard the rest.
COPY . /throwaway
RUN cp /throwaway/requirements.txt . || echo 'requirements.txt does not exist'
RUN rm -rf /throwaway
You can achieve something similar using build stages, which relies on the same solution, using cp to conditionally copy. By using a build stage, your final image will not include all the content from the initial COPY.
FROM alpine as copy_stage
COPY . .
RUN mkdir /dir_for_maybe_requirements_file
RUN cp requirements.txt /dir_for_maybe_requirements_file &>- || true
FROM alpine
# Must copy a file which exists, so copy a directory with maybe one file
COPY --from=copy_stage /dir_for_maybe_requirements_file /
RUN cp /dir_for_maybe_requirements_file/* . &>- || true
CMD sh

I tried the other ideas, but none met our requirement. The idea is to create a base nginx image for child static web applications. For security, optimization, and standardization reasons, the base image must be able to RUN commands on directories added by child images. The base image does not control which directories are added by child images. The assumption is that child images will COPY resources somewhere under COMMON_DEST_ROOT.
This approach is a hack, but the idea is that the base image supports COPY instructions for 1 to N directories added by the child image. ARG PLACEHOLDER_FILE and ENV UNPROVIDED_DEST are used to satisfy the <src> and <dest> requirements for any COPY instruction that is not needed.
#
# base-image:01
#
FROM nginx:1.17.3-alpine
ENV UNPROVIDED_DEST=/unprovided
ENV COMMON_DEST_ROOT=/usr/share/nginx/html
ONBUILD ARG PLACEHOLDER_FILE
ONBUILD ARG SRC_1
ONBUILD ARG DEST_1
ONBUILD ARG SRC_2
ONBUILD ARG DEST_2
ONBUILD ENV SRC_1=${SRC_1:-${PLACEHOLDER_FILE}}
ONBUILD ENV DEST_1=${DEST_1:-${UNPROVIDED_DEST}}
ONBUILD ENV SRC_2=${SRC_2:-${PLACEHOLDER_FILE}}
ONBUILD ENV DEST_2=${DEST_2:-${UNPROVIDED_DEST}}
ONBUILD COPY ${SRC_1} ${DEST_1}
ONBUILD COPY ${SRC_2} ${DEST_2}
ONBUILD RUN sh -x \
#
# perform operations on COMMON_DEST_ROOT
#
&& chown -R limited:limited ${COMMON_DEST_ROOT} \
#
# remove the unprovided dest
#
&& rm -rf ${UNPROVIDED_DEST}
#
# child image
#
ARG PLACEHOLDER_FILE=dummy_placeholder.txt
ARG SRC_1=app/html
ARG DEST_1=/usr/share/nginx/html/myapp
FROM base-image:01
This solution has obvious shortcomings like the dummy PLACEHOLDER_FILE and hard-coded number of COPY instructions that are supported. Also there is no way to get rid of the ENV variables that are used in the COPY instruction.
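For reference, a sketch of how the child image might then be built, passing the build args that the ONBUILD triggers expect (values taken from the child-image example above):
docker build \
  --build-arg PLACEHOLDER_FILE=dummy_placeholder.txt \
  --build-arg SRC_1=app/html \
  --build-arg DEST_1=/usr/share/nginx/html/myapp \
  -t child-image .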

I have another workaround for the same problem. The idea is to touch the file in the build context and use a plain COPY statement inside the Dockerfile. If the file does not exist, touch just creates an empty file and the docker build will not fail. If the file already exists, touch only updates its timestamp.
touch requirements.txt
and in the Dockerfile:
FROM python:3.9
COPY requirements.txt .
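pip happily installs from an empty requirements.txt, but if you would rather skip the step entirely when the file is empty, you can still guard it as in the question (a sketch; -s tests that the file exists and is non-empty):
FROM python:3.9
COPY requirements.txt .
RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi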

Related

dockerfile cannot build: CONDA env create

Hi there, I'm new to Docker and Dockerfiles in general.
However, I need to create one in order to load an application on a server using WDL. With that said, there are a few important aspects of this Dockerfile:
requires to create a Conda environment
in there I have to install Snakemake (through Mamba)
finally, I will need to git clone a repository and follow the steps to generate an executable for the application, later invoked by Snakemake
Luckily, it seems most of the pieces are already on Docker Hub; correct me if I'm wrong, based on the script (see below):
# getting ubuntu base image & anaconda3 loaded
2 FROM ubuntu:latest
3 FROM continuumio/anaconda3:2021.05
4 FROM condaforge/mambaforge:latest
5 FROM snakemake/snakemake:stable
6
7 FROM node:alpine
8 RUN apk add --no-cache git
9 RUN apk add --no-cache openssh
10
11 MAINTAINER Name <email>
12
13 WORKDIR /home/xxx/Desktop/Pangenie
14
15 ## ACTUAL PanGenIe INSTALLATION
16 RUN git clone https://github.com/eblerjana/pangenie.git /home/xxx/Desktop/Pangenie
17 # create the environment
18 RUN conda env create -f environment.yml
19 # build the executable
20 RUN conda activate pangenie
21 RUN mkdir build; cd build; cmake .. ; make
First, I think that also loading Mamba and Snakemake would allow me to simply launch the application, as the tools are already set up by the Dockerfile. Then I would ideally like to build the executable from the repository, but I get an error at line 18 when I try to create the Conda environment. This is what I get:
[+] Building 1.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 708B 0.1s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.1s
=> [internal] load metadata for docker.io/library/node:alpine 1.4s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [stage-4 1/6] FROM docker.io/library/node:alpine@sha256:1a04e2ec39cc0c3a9657c1d6f8291ea2f5ccadf6ef4521dec946e522833e87ea 0.0s
=> CACHED [stage-4 2/6] RUN apk add --no-cache git 0.0s
=> CACHED [stage-4 3/6] RUN apk add --no-cache openssh 0.0s
=> CACHED [stage-4 4/6] WORKDIR /home/mat/Desktop/Pangenie 0.0s
=> CACHED [stage-4 5/6] RUN git clone https://github.com/eblerjana/pangenie.git /home/mat/Desktop/Pangenie 0.0s
=> ERROR [stage-4 6/6] RUN conda env create -f environment.yml 0.1s
------
> [stage-4 6/6] RUN conda env create -f environment.yml:
#10 0.125 /bin/sh: conda: not found
executor failed running [/bin/sh -c conda env create -f environment.yml]: exit code: 127
Now, I'm not really experienced, as I said, and I spent some time looking for a solution and tried different things, but nothing worked out... if anyone has an idea or even suggestions on how to fix this Dockerfile, please let me know.
Thanks in advance!

Dockerfile cannot run a container using "docker-compose up --build" command

When I run the Dockerfile using the "docker-compose up --build" command, a "file not found" error is output and the container does not run.
The Dockerfile, docker-compose.yaml, directories, and result are below.
Docker version :
\server>docker --version
Docker version 20.10.14, build a224086
Dockerfile :
FROM openjdk:14-jdk-alpine3.10
RUN mkdir -p /app/workspace/config && \
mkdir -p /app/workspace/lib && \
mkdir -p /app/workspace/bin
WORKDIR /app/workspace
VOLUME /app/workspace
COPY ./bin ./bin
COPY ./config ./config
COPY ./lib ./lib
RUN chmod 774 /app/workspace/bin/*.sh
EXPOSE 6969
WORKDIR /app/workspace/bin
ENTRYPOINT ./startServer.sh
docker-compose.yaml:
version: '3'
services:
server:
container_name: cn-server
build:
context: ./server/
dockerfile: Dockerfile
ports:
- "6969:6969"
volumes:
- ${SERVER_HOST_DIR}:/app/workspace
networks:
- backend
networks:
backend:
driver: bridge
directories: (screenshot in the original post)
"docker-compose up --build" command execution result :
Building server
[+] Building 3.7s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 425B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:14-jdk-alpine3.10 2.0s
=> [internal] load build context 0.0s
=> => transferring context: 239B 0.0s
=> CACHED [1/8] FROM docker.io/library/openjdk:14-jdk-alpine3.10@sha256:b8082268ef46d44ec70fd5a64c71d445492941813ba9d68049be6e63a0da542f 0.0s
=> [2/8] RUN mkdir -p /app/workspace/config && mkdir -p /app/workspace/lib && mkdir -p /app/workspace/bin 0.4s
=> [3/8] WORKDIR /app/workspace 0.1s
=> [4/8] COPY ./bin ./bin 0.1s
=> [5/8] COPY ./config ./config 0.1s
=> [6/8] COPY ./lib ./lib 0.1s
=> [7/8] RUN chmod 774 /app/workspace/bin/*.sh 0.5s
=> [8/8] WORKDIR /app/workspace/bin 0.1s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:984554c9d7d9b3312fbe2dc76b4c7381e93cebca3a808ca16bd9e3777d42f919 0.0s
=> => naming to docker.io/library/docker_cn-server 0.0s
Creating cn-server ... done
Attaching to cn-server
cn-server | /bin/sh: ./startServer.sh: not found
cn-server exited with code 127
Also, the bin, config, and lib directories are not created in the host volume directory, and there are no files.
Please tell me what I did wrong or what I used incorrectly.
Thank you.
There are two obvious problems here, both related to Docker volumes.
In your Dockerfile, you switch to WORKDIR /app/workspace and do some work there, but then in the Compose setup, you bind-mount a host directory over all of /app/workspace. This causes all of the work in the Dockerfile to be lost, and replaces the code in the image with unpredictable content from the host. In the docker-compose.yml file you should delete the volumes: block. You should be able to reduce what you've shown to as little as
version: '3.8'
services:
server:
build: ./server
ports:
- '6969:6969'
The second problem is in the Dockerfile itself. You declare VOLUME /app/workspace fairly early on. This is unnecessary, though, and its most visible effect is to cause later RUN commands in that directory to have no effect. So in particular your RUN chmod ... command isn't happening. Deleting the VOLUME line can help with that. (You also don't need to RUN mkdir directories you're about to COPY into the image.)
FROM openjdk:14-jdk-alpine3.10
WORKDIR /app/workspace
COPY ./bin ./bin
COPY ./config ./config
COPY ./lib ./lib
RUN chmod 0755 bin/*.sh
EXPOSE 6969
WORKDIR /app/workspace/bin
CMD ["./startServer.sh"]
There are other potential problems depending on the content of the startServer.sh file. I'd expect this to be a shell script and its first line to be a "shebang" line, #!/bin/sh. If it explicitly names GNU Bash or another shell, that won't be present in an Alpine-based image. If you're working on a Windows-based system and the file has DOS line endings, that will also cause an error.
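A quick way to check both of those on the host before building (a sketch, assuming the script lives at server/bin/startServer.sh; dos2unix may need to be installed separately):
head -n 1 server/bin/startServer.sh   # first line should be a shebang such as #!/bin/sh
file server/bin/startServer.sh        # "with CRLF line terminators" indicates DOS line endings
dos2unix server/bin/startServer.sh    # convert the line endings if needed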

WORKDIR $HOME cannot normalize nothing

I'm new to Docker. I have a very simple Dockerfile, but when I try to build an image from it, I get a "cannot normalize nothing" error.
FROM alpine:latest
RUN apk add vim
CMD ["bash"]
WORKDIR $HOME
For building the image, I use the following command:
$ docker build -t aamirglb/test:1.0 .
I used Play With Docker (PWD) to test this script; the output is shown in a screenshot in the original post.
If I comment out WORKDIR $HOME line, the build is successful.
Any help in understanding this error will be highly appreciated.
Thanks in advance.
For those that can't reproduce, this error does not happen with buildkit:
$ docker build -t test-workdir -f df.workdir-empty .
[+] Building 0.2s (6/6) FINISHED
=> [internal] load build definition from df.workdir-empty 0.0s
=> => transferring dockerfile: 43B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 0.0s
=> [1/3] FROM docker.io/library/alpine:latest 0.0s
=> CACHED [2/3] RUN apk add vim 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:cadafef9a77c95b1b32499bcc1beba016ff0a7071710a1c37eb4e5f32e5d1c94 0.0s
=> => naming to docker.io/library/test-workdir 0.0s
But if you build with the classic builder, the error does appear:
$ DOCKER_BUILDKIT=0 docker build -t test-workdir -f df.workdir-empty .
Sending build context to Docker daemon 23.04kB
Step 1/4 : FROM alpine:latest
---> 14119a10abf4
Step 2/4 : RUN apk add vim
---> Running in a286e7f3107a
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/5) Installing xxd (8.2.4708-r0)
(2/5) Installing lua5.3-libs (5.3.6-r0)
(3/5) Installing ncurses-terminfo-base (6.2_p20210612-r0)
(4/5) Installing ncurses-libs (6.2_p20210612-r0)
(5/5) Installing vim (8.2.4708-r0)
Executing busybox-1.33.1-r3.trigger
OK: 26 MiB in 19 packages
Removing intermediate container a286e7f3107a
---> acdd6e1963db
Step 3/4 : CMD ["bash"]
---> Running in 6deb306db96a
Removing intermediate container 6deb306db96a
---> 960f0de2f376
Step 4/4 : WORKDIR $HOME
cannot normalize nothing
And before you get your hopes up, workdir in the buildkit example is not set to the user's home directory:
$ docker inspect test-workdir:latest --format '{{.Config.WorkingDir}}'
/
So first, what does cannot normalize nothing mean? Normalizing is the process of taking a path like /.//some/../path and turning it into /path, cleaning the string of unnecessary content and converting it into a uniform path. That process doesn't work on nothing: it needs a string or value, or at least the classic builder does (BuildKit seems to have a sensible default).
Why doesn't $HOME default to /root? Because that variable was never set in docker. The only environment variable defined in there is a path:
$ docker inspect test-workdir:latest --format '{{.Config.Env}}'
[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]
When you launch a shell like /bin/sh in the container, that shell is what is defining $HOME and various other variables:
$ docker run -it --rm test-workdir /bin/sh
/ # env
HOSTNAME=c68d4370e413
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
Since there was no ENV or ARG to set $HOME, you cannot use it in the WORKDIR step.
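If the goal is simply to land in the root user's home directory, a minimal fix is to define the variable yourself before using it (a sketch; note that bash isn't installed in alpine, so sh is used for CMD here):
FROM alpine:latest
RUN apk add vim
ENV HOME=/root
WORKDIR $HOME
CMD ["sh"]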

Docker Go image: starting container process caused: exec: "app": executable file not found in $PATH: unknown

I have been reading a lot of similar issues in different languages, but none of them are about Go.
I just created a Dockerfile with the instructions I followed on official Docker hub page:
FROM golang:1.17.3
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
This is my folder structure:
users-service
|-> .gitignore
|-> Dockerfile
|-> go.mod
|-> main.go
|-> README.md
If anyone needs to see some code, this is what my main.go looks like:
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
I ran docker build -t users-service .:
$ docker build -t users-service .
[+] Building 5.5s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 154B 0.1s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/golang:1.17.3 3.3s
=> [auth] library/golang:pull token for registry-1.docker.io 0.0s
=> [1/5] FROM docker.io/library/golang:1.17.3@sha256:6556ce40115451e40d6afbc12658567906c9250b0fda250302dffbee9d529987 0.3s
=> [internal] load build context 0.1s
=> => transferring context: 2.05kB 0.0s
=> [2/5] WORKDIR /go/src/app 0.1s
=> [3/5] COPY . . 0.1s
=> [4/5] RUN go get -d -v ./... 0.6s
=> [5/5] RUN go install -v ./... 0.7s
=> exporting to image 0.2s
=> => exporting layers 0.1s
=> => writing image sha256:1f0e97ed123b079f80eb259dh3e34c90a48bf93e8f55629d05044fec8bfcaca6 0.0s
=> => naming to docker.io/library/users-service 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Then I ran docker run users-service but I get that error:
$ docker run users-service
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "app": executable file not found in $PATH: unknown.
I remember I had some trouble with the GOPATH environment variable in Visual Studio Code on Windows, maybe it's related... Any suggestions?
The official Docker documentation has useful instructions for building a Go image: https://docs.docker.com/language/golang/build-images/
In summary, you need to build your Go binary and you need to configure the CMD appropriately, e.g.:
FROM golang:1.17.3
WORKDIR /app
COPY main.go .
COPY go.mod ./
RUN go build -o /my-go-app
CMD ["/my-go-app"]
Build the container:
$ docker build -t users-service .
Run the docker container:
$ docker run --rm -it users-service
Hello, World!
Your "app" executable binary should be available in your $PATH to call globally without any path prefix. Otherwise, you have to supply your full path to your executable like CMD ["/my/app"]
Also, I recommend using an ENTRYPOINT instruction. ENTRYPOINT indicates the direct path to the executable, while CMD indicates arguments supplied to the ENTRYPOINT.
Using combined RUN instructions keeps your image layers minimal, so your overall image size becomes a little bit smaller compared to using multiple RUNs.
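A sketch of that ENTRYPOINT/CMD split, reusing the build steps from above (the /my-go-app name is the same assumption as before):
FROM golang:1.17.3
WORKDIR /app
COPY go.mod ./
COPY main.go .
RUN go build -o /my-go-app
# the binary itself is the entrypoint; arguments passed to `docker run` are appended to it
ENTRYPOINT ["/my-go-app"]
With this, docker run --rm users-service runs the binary directly, and anything after the image name on the docker run command line is passed through as program arguments.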

How do I use java 11 with Docker?

In my Dockerfile I have my FROM line like so:
FROM openjdk:11-jdk-alpine as build
Previously it was on java 8 and everything was working fine.
Now I get this error :
C:\dev\shape-shop-back-end>docker build .
[+] Building 2.0s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 1.31kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> CANCELED [internal] load metadata for docker.io/library/openjdk:11-jre-alpine 1.8s
=> ERROR [internal] load metadata for docker.io/library/openjdk:11-jdk-alpine 1.8s
=> [auth] library/openjdk:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/openjdk:11-jdk-alpine:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: docker.io/library/openjdk:11-jdk-alpine: not found
My docker file :
#### Stage 1: Build the application
FROM openjdk:11-jdk-alpine as build
# Set the current working directory inside the image
WORKDIR /app
# Copy maven executable to the image
COPY mvnw .
COPY .mvn .mvn
# Copy the pom.xml file
COPY pom.xml .
# Build all the dependencies in preparation to go offline.
# This is a separate step so the dependencies will be cached unless
# the pom.xml file has changed.
RUN ./mvnw dependency:go-offline -B
# Copy the project source
COPY src src
# Package the application
RUN ./mvnw package -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
#### Stage 2: A minimal docker image with command to run the app
FROM openjdk:11-jre-alpine
ARG DEPENDENCY=/app/target/dependency
# Copy project dependencies from the build stage
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.shapeshop.App"]
#ARG JAR_FILE=target/*.jar
#COPY ${JAR_FILE} app.jar
#ENTRYPOINT ["java","-jar","/app.jar"]
#
## Expose port 80 to the Docker host, so we can access it
## from the outside.
#EXPOSE 8080
Am I missing something?
If I do a FROM without the "alpine", like so:
FROM openjdk:11 as build
then it works for me.
This may help:
https://hub.docker.com/_/openjdk
"The OpenJDK port for Alpine is not in a supported release by OpenJDK, since it is not in the mainline code base. It is only available as early access builds of OpenJDK Project Portola. See also this comment. So this image follows what is available from the OpenJDK project's maintainers.
What this means is that Alpine based images are only released for early access release versions of OpenJDK. Once a particular release becomes a "General-Availability" release, the Alpine version is dropped from the "Supported Tags"; they are still available to pull, but will no longer be updated."
I don't use Java images, but this may suffice instead:
https://hub.docker.com/r/adoptopenjdk/openjdk11/
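If you want to stay close to the original Dockerfile, a minimal sketch is to keep the two-stage layout and only swap the FROM lines for non-Alpine tags that are still published (the rest of the Dockerfile stays as above):
#### Stage 1: Build the application
FROM openjdk:11-jdk as build
# ... build steps unchanged ...
#### Stage 2: A minimal docker image with command to run the app
FROM openjdk:11-jre
# ... COPY --from=build and ENTRYPOINT unchanged ...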

Resources