Docker build failed to copy a file - docker

Hi, I am new to Docker and trying to wrap my head around how to clone a private repo from GitHub. I found an interesting link, issues/6396.
I followed one of the posts, and my Dockerfile looks like this:
FROM python:2.7 as builder
# Deploy app's code
#RUN set -x
RUN mkdir /code
RUN mkdir /root/.ssh/
RUN ls -l /root/.ssh/
# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
RUN echo "${SSH_PRIVATE_KEY}"
# Set up root user SSH access for GitHub
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
# Test SSH access (this returns false even when successful, but prints results)
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth
RUN git clone git@github.com:***********.git
COPY . /code
WORKDIR /code
ENV PYTHONPATH /datawarehouse_process
# Setup app's virtualenv
RUN set -x \
&& pip install tox \
&& tox -e luigi
WORKDIR /datawarehouse_process
# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*
#FROM python:2.7 as runtime
#COPY --from=builder /code /code
When I run docker build . from the correct location, I get the error below. Any clue would be appreciated.
c:\Domain\Project\Docker-Images\datawarehouse_process>docker build .
Sending build context to Docker daemon 281.7MB
Step 1/15 : FROM python:2.7 as builder
---> 43c5f3ee0928
Step 2/15 : RUN mkdir /code
---> Running in 841fadc29641
Removing intermediate container 841fadc29641
---> 69fdbcd34f12
Step 3/15 : RUN mkdir /root/.ssh/
---> Running in 50199b0eb002
Removing intermediate container 50199b0eb002
---> 6dac8b120438
Step 4/15 : RUN ls -l /root/.ssh/
---> Running in e15040402b79
total 0
Removing intermediate container e15040402b79
---> 65519edac99a
Step 5/15 : ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
---> Running in 10e0c92eed4f
Removing intermediate container 10e0c92eed4f
---> 707279c92614
Step 6/15 : RUN echo "${SSH_PRIVATE_KEY}"
---> Running in a9f75c224994
C:\Users\MyUser\.ssh\id_rsa
Removing intermediate container a9f75c224994
---> 96e0605d38a9
Step 7/15 : ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
ADD failed: stat /var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory

From the Documentation:
ADD obeys the following rules:
The path must be inside the context of the build; you cannot ADD
../something /something, because the first step of a docker build is
to send the context directory (and subdirectories) to the docker
daemon.
You are passing an absolute path to ADD, but you can see from the error:
/var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
It is being looked for within the build context. Again from the documentation:
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context.
So, you need to place the RSA key somewhere in the directory tree whose root is the path you specify in your docker build command. If you are entering docker build ., your ARG statement would change to something like:
ARG SSH_PRIVATE_KEY=./.ssh/id_rsa
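For illustration, here is a minimal sketch of the adjusted steps. It assumes the key is first copied into a .ssh folder inside the build context (the folder name and the copy commands are assumptions, not part of the original question or answer), and that .ssh is not listed in .dockerignore:
REM run from c:\Domain\Project\Docker-Images\datawarehouse_process
mkdir .ssh
copy %USERPROFILE%\.ssh\id_rsa .ssh\id_rsa
docker build .
with the Dockerfile referencing the context-relative path:
ARG SSH_PRIVATE_KEY=./.ssh/id_rsa
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa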

Related

Error copying folder using COPY --from from another layer

The Dockerfile uses COPY --from to copy from another Node build stage, but the generated directory is not found.
Note 1: This Dockerfile works locally on my machine when I build it normally.
Note 2: The execution log mentions the removal of an intermediate container; is that the cause? Would it be possible to preserve that container so that the copy works?
FROM node:16.16 as build
# USER node
WORKDIR /app
COPY package.json /app
RUN npm install --location=global npm@latest && npm install --silent
COPY . .
ARG SCRIPT
ENV SCRIPT=$SCRIPT
ARG API_URL
ENV API_URL=$API_URL
ARG API_SERVER
ENV API_SERVER=$API_SERVER
CMD ["/bin/sh", "-c", "envsubst < src/proxy.conf.template.js > src/proxy.conf.js"]
RUN npm run ${SCRIPT}
FROM nginx:1.23
VOLUME /var/cache/nginx
EXPOSE 80
COPY --from=build /app/dist/siai-spa /usr/share/nginx/html
COPY ./config/nginx-template.conf /etc/nginx/nginx-template.conf
b9ed43dcc388: Pull complete
Digest: sha256:db345982a2f2a4257c6f699a499feb1d79451a1305e8022f16456ddc3ad6b94c
Status: Downloaded newer image for nginx:1.23
---> 41b0e86104ba
Step 15/24 : VOLUME /var/cache/nginx
---> Running in dc0e24ae6e51
Removing intermediate container dc0e24ae6e51
---> 3b2799dad197
Step 16/24 : EXPOSE 80
---> Running in f30edd617285
Removing intermediate container f30edd617285
---> 21985745ce49
Step 17/24 : COPY --from=build /app/dist/siai-spa /usr/share/nginx/html
COPY failed: stat app/dist/siai-spa: file does not exist
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
I guess you should use CMD instead of RUN for npm run ${SCRIPT}, as this needs to be executed at container run time rather than at image build time.
Problem solved!
The difference was that locally I used docker-compose, which picks up the build arguments from the .env file. The npm run command did not get the ${SCRIPT} value because plain docker build does not read the .env file; the values have to be passed through with the --build-arg parameters.
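As a sketch of what that means in practice (the values below are placeholders, not taken from the original project), the CI build would pass the arguments explicitly:
docker build \
  --build-arg SCRIPT=build:prod \
  --build-arg API_URL=https://api.example.com \
  --build-arg API_SERVER=api.example.com \
  -t siai-spa .
while docker-compose can keep reading the same values from .env via an args: section in its build configuration.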

docker container couldn't locate file but file is present

I am trying to package gotty into a Docker container, but I have found some weird behavior.
$ tree
.
├── Dockerfile
├── gotty
└── gotty_linux_amd64.tar.gz
Dockerfile:
FROM alpine:3.11.3
RUN mkdir -p /home/gotty
WORKDIR /home/gotty
COPY gotty /home/gotty
RUN chmod +x /home/gotty/gotty
CMD ["/bin/sh"]
The image was built without issue:
[strip...]
Removing intermediate container 0dee1ab645e0
---> b5c6957d36e1
Step 7/9 : COPY gotty /home/gotty
---> fb1a1adec04a
Step 8/9 : RUN chmod +x /home/gotty/gotty
---> Running in 90031140da40
Removing intermediate container 90031140da40
---> 609e1a5453f7
Step 9/9 : CMD ["/bin/sh"]
---> Running in 30ce65cd4339
Removing intermediate container 30ce65cd4339
---> 099bc22ee6c0
Successfully built 099bc22ee6c0
The chmod changed the file mode successfully. So /home/gotty/gotty is present.
$ docker run -itd 099bc22ee6c0
9b219a6ef670b9576274a7b82a1b2cd813303c6ea5280e17a23a917ce809c5fa
$ docker exec -it 9b219a6ef670 /bin/sh
/home/gotty # ls
gotty
/home/gotty # ./gotty
/bin/sh: ./gotty: not found
Going into the container, the gotty command is there. I ran it with a relative path. Why the "not found"?
You are running into one of the more notorious problems with Alpine: musl instead of glibc. Check the output of ldd gotty. Try adding libc6-compat:
apk add libc6-compat
and see if that fixes it.
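A sketch of the Dockerfile with that package added (assuming the gotty binary was built against glibc; if ldd gotty reports other missing libraries, this alone may not be enough):
FROM alpine:3.11.3
# musl-based Alpine needs the glibc compatibility layer for glibc-linked binaries
RUN apk add --no-cache libc6-compat
RUN mkdir -p /home/gotty
WORKDIR /home/gotty
COPY gotty /home/gotty
RUN chmod +x /home/gotty/gotty
CMD ["/bin/sh"]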

Dockerfile, ARG and ENV not working

I have the following Dockerfile:
#Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This Dockerfile extends the Oracle WebLogic image by creating a sample domain.
#
# Util scripts are copied into the image enabling users to plug NodeManager
# automatically into the AdminServer running on another container.
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
# $ sudo docker build -t 12213-domain .
#
# Pull base image
# ---------------
FROM oracle/weblogic:12.2.1.3-developer
# Maintainer
# ----------
MAINTAINER Monica Riccelli <monica.riccelli@oracle.com>
ARG DOMAIN_NAME
ARG ADMIN_PORT
ARG ADMIN_NAME
ARG ADMIN_USERNAME
ARG ADMIN_PASSWORD
# WLS Configuration
# ---------------------------
ENV ADMIN_HOST="wlsadmin" \
NM_PORT="5556" \
MS_PORT="8001" \
DEBUG_PORT="8453" \
ORACLE_HOME=/u01/oracle \
SCRIPT_FILE=/u01/oracle/createAndStartWLSDomain.sh \
CONFIG_JVM_ARGS="-Dweblogic.security.SSL.ignoreHostnameVerification=true" \
PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/${DOMAIN_NAME:-base_domain}/bin:/u01/oracle
# Domain and Server environment variables
# ------------------------------------------------------------
ENV DOMAIN_NAME="${DOMAIN_NAME}" \
PRE_DOMAIN_HOME=/u01/oracle/user_projects \
ADMIN_PORT="${ADMIN_PORT}" \
ADMIN_USERNAME="${ADMIN_USERNAME}" \
ADMIN_NAME="${ADMIN_NAME}" \
MS_NAME="${MS_NAME:-""}" \
NM_NAME="${NM_NAME:-""}" \
ADMIN_PASSWORD="${ADMIN_PASSWORD}" \
CLUSTER_NAME="${CLUSTER_NAME:-DockerCluster}" \
DEBUG_FLAG=true \
PRODUCTION_MODE=dev
# Add files required to build this image
COPY container-scripts/* /u01/oracle/
#Create directory where domain will be written to
USER root
RUN chmod +xw /u01/oracle/*.sh && \
chmod +xw /u01/oracle/*.py && \
mkdir -p $PRE_DOMAIN_HOME && \
chmod a+xr $PRE_DOMAIN_HOME && \
chown -R oracle:oracle $PRE_DOMAIN_HOME
VOLUME $PRE_DOMAIN_HOME
# Expose Node Manager default port, and also default for admin and managed server
EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT $DEBUG_PORT
USER oracle
WORKDIR $ORACLE_HOME
# Define default command to start bash.
CMD ["/u01/oracle/createAndStartWLSDomain.sh"]
Which I build using this command:
#!/bin/sh
#
#Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
docker build -t 12213-domain \
--no-cache \
--build-arg DOMAIN_NAME=domain \
--build-arg ADMIN_PORT=7001 \
--build-arg ADMIN_NAME=admin \
--build-arg ADMIN_USERNAME=wlsuser \
--build-arg ADMIN_PASSWORD=wlsuser1 \
.
And I get the following build log:
$ ./build.sh
Sending build context to Docker daemon 51.71kB
Step 1/17 : FROM oracle/weblogic:12.2.1.3-developer
---> 15ba3f59a9f9
Step 2/17 : MAINTAINER Monica Riccelli <monica.riccelli@oracle.com>
---> Running in ac70adb36a4b
Removing intermediate container ac70adb36a4b
---> fe34e24ffce7
Step 3/17 : ARG DOMAIN_NAME
---> Running in 073a89d7613c
Removing intermediate container 073a89d7613c
---> de10930a27d6
Step 4/17 : ARG ADMIN_PORT
---> Running in d213833315c2
Removing intermediate container d213833315c2
---> 9af410c46028
Step 5/17 : ARG ADMIN_NAME
---> Running in 2baee277da54
Removing intermediate container 2baee277da54
---> a76f3f3d6642
Step 6/17 : ARG ADMIN_USERNNAME
---> Running in 9127852dae20
Removing intermediate container 9127852dae20
---> bb9af74b5804
Step 7/17 : ARG ADMIN_PASSWORD
---> Running in 4d0b1969605b
Removing intermediate container 4d0b1969605b
---> af18d5b6be2d
Step 8/17 : ENV ADMIN_HOST="wlsadmin" NM_PORT="5556" MS_PORT="8001" DEBUG_PORT="8453" ORACLE_HOME=/u01/oracle SCRIPT_FILE=/u01/oracle/createAndStartWLSDomain.sh CONFIG_JVM_ARGS="-Dweblogic.security.SSL.ignoreHostnameVerification=true" PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/${DOMAIN_NAME:-base_domain}/bin:/u01/oracle
---> Running in 449a28590d90
Removing intermediate container 449a28590d90
---> 2a1ddd961d5c
Step 9/17 : ENV DOMAIN_NAME="${DOMAIN_NAME}" PRE_DOMAIN_HOME=/u01/oracle/user_projects ADMIN_PORT="${ADMIN_PORT}" ADMIN_USERNAME="${ADMIN_USERNAME}" ADMIN_NAME="${ADMIN_NAME}" MS_NAME="${MS_NAME:-""}" NM_NAME="${NM_NAME:-""}" ADMIN_PASSWORD="${ADMIN_PASSWORD}" CLUSTER_NAME="${CLUSTER_NAME:-DockerCluster}" DEBUG_FLAG=true PRODUCTION_MODE=dev
---> Running in 0b01881a1ca4
Removing intermediate container 0b01881a1ca4
---> 7a3cd53ea5a3
Step 10/17 : COPY container-scripts/* /u01/oracle/
---> ce67247c3f7e
Step 11/17 : USER root
---> Running in 61adcafc1226
Removing intermediate container 61adcafc1226
---> f9a781fda963
Step 12/17 : RUN chmod +xw /u01/oracle/*.sh && chmod +xw /u01/oracle/*.py && mkdir -p $PRE_DOMAIN_HOME && chmod a+xr $PRE_DOMAIN_HOME && chown -R oracle:oracle $PRE_DOMAIN_HOME
---> Running in 82b7b258d6f1
Removing intermediate container 82b7b258d6f1
---> 0cda254bc640
Step 13/17 : VOLUME $PRE_DOMAIN_HOME
---> Running in 6650ff8092d3
Removing intermediate container 6650ff8092d3
---> c469ff0ac9a2
Step 14/17 : EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT $DEBUG_PORT
---> Running in a551b6bd5363
Removing intermediate container a551b6bd5363
---> 08253c4d94bd
Step 15/17 : USER oracle
---> Running in f1e6b4e482e9
Removing intermediate container f1e6b4e482e9
---> 85a75641e866
Step 16/17 : WORKDIR $ORACLE_HOME
Removing intermediate container a2b75ecbb0b6
---> 0124a8251ce3
Step 17/17 : CMD ["/u01/oracle/createAndStartWLSDomain.sh"]
---> Running in 1455fdc0d39d
Removing intermediate container 1455fdc0d39d
---> c1c947c2816c
[Warning] One or more build-args [ADMIN_USERNAME] were not consumed
Successfully built c1c947c2816c
Successfully tagged 12213-domain:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Note the message
[Warning] One or more build-args [ADMIN_USERNAME] were not consumed
When I start the container and print the environment:
FMW_PKG=fmw_12.2.1.3.0_wls_quick_Disk1_1of1.zip
CONFIG_JVM_ARGS=-Dweblogic.security.SSL.ignoreHostnameVerification=true
HOSTNAME=74adf82e8092
PRODUCTION_MODE=dev
TERM=xterm
ADMIN_NAME=AdminServer
CLUSTER_NAME=DockerCluster
FMW_JAR=fmw_12.2.1.3.0_wls_quick.jar
USER_MEM_ARGS=-Djava.security.egd=file:/dev/./urandom
DEBUG_FLAG=true
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd
=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;4
2:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:
*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=0
1;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;
31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=0
1;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sa
r=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:
*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01
;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.ti
ff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;3
5:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp
4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:
*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01
;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=
01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.f
lac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;
36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
MS_NAME=
DOMAIN_NAME=base_domain
SCRIPT_FILE=/u01/oracle/createAndStartWLSDomain.sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/default/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin:/u01/oracle
ADMIN_HOST=wlsadmin
DOMAIN_HOME=/u01/oracle/user_projects/domains/base_domain
NM_NAME=
PWD=/u01/oracle
DEBUG_PORT=8453
ADMIN_PORT=7001
JAVA_HOME=/usr/java/default
JAVA_PKG=server-jre-8u*-linux-x64.tar.gz
SHLVL=1
HOME=/u01/oracle
ADMIN_USERNAME=weblogic
NM_PORT=5556
PRE_DOMAIN_HOME=/u01/oracle/user_projects
ADMIN_PASSWORD=
ORACLE_HOME=/u01/oracle
MS_PORT=8001
_=/usr/bin/env
The environment variables are not set correctly; the container uses the environment variables from the base image instead. I want to override those variables.
EDIT:
The problem is either --build-arg not passing the parameters to ARG, or ARG not correctly binding to ENV. The environment is overridden correctly if I use a constant string in the ENV. Even more interestingly, if I don't pass --build-arg for those ARG variables, I don't get any warning or error and still get exactly the same build log. AFAIK, an unbound ARG without a default value should raise an error.
Is this a bug? I'm using Docker Toolbox on Windows, and here is the Docker version:
docker-machine.exe version 0.14.0, build 89b8332
Docker version 18.03.0-ce, build 0520e24302
From docs.docker.com:
Environment variables defined using the ENV instruction always
override an ARG instruction of the same name
So the only way to resolve this is to use a different name for the ARG variables.
There is a typo in your ARG declaration: the build log shows ARG ADMIN_USERNNAME.
I think changing that to ADMIN_USERNAME will fix your problem.
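For reference, a minimal sketch (not taken from the question) showing that a matching ARG/ENV pair does pick up a --build-arg value:
FROM alpine
ARG ADMIN_USERNAME
# Bind the build argument into an environment variable of the same name
ENV ADMIN_USERNAME="${ADMIN_USERNAME}"
RUN echo "build-time value: ${ADMIN_USERNAME}"
Build and check the resulting environment:
docker build -t argtest --build-arg ADMIN_USERNAME=wlsuser .
docker run --rm argtest env | grep ADMIN_USERNAME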

Multi-stage build - no such file or directory

My multi-stage build isn't finding the /assets/css directory when it is time to copy it. What do I need to change?
Docker version
Version 17.06.0-ce-mac17 (18432)
Channel: edge
4bb7a7dfa0
myimage:sass, the image that I use in the multi-stage build:
FROM ruby:2.4.1-slim
RUN mkdir /assets
VOLUME /assets
RUN gem install sass
ENTRYPOINT ["sass"]
Multi-stage build Dockerfile
Note the debugging command I run in the first stage, cd /assets/css && ls && pwd; its result is shown during the build phase.
# Compile Sass
FROM myimage:sass AS builder
COPY app/assets/sass /assets/sass
RUN sass --update --force --sourcemap=none --stop-on-error /assets/sass:/assets/css &&\
# sass directory isn't needed
rm -r assets/sass &&\
# debugging: check /assets/css exists inside the container
cd /assets/css && ls && pwd
FROM alpine:3.6
WORKDIR /app
RUN mkdir /app/logs
VOLUME ["/app/logs"]
COPY --from=builder /assets/css /app/assets/css
EXPOSE 80
CMD ["./bin/bash"]
Raw output of docker build -t myimage:css .
Notice the output of cd /assets/css && ls && pwd at step 3/11, showing that the /assets/css directory exists and has a main.css file inside:
Step 1/11 : FROM myimage:sass AS builder
---> 7c6662186d55
Step 2/11 : COPY app/assets/sass /assets/sass
---> 76b5d86846b8
Removing intermediate container ee74d16617b4
Step 3/11 : RUN sass --update --force --sourcemap=none --stop-on-error /assets/sass:/assets/css && rm -r assets/sass && cd /assets/css && pwd
---> Running in 83dc591edc5c
directory /assets/css
write /assets/css/main.css
main.css
/assets/css
---> 3939f46fb355
Removing intermediate container 83dc591edc5c
Step 4/11 : FROM alpine:3.6
---> 7328f6f8b418
Step 5/11 : WORKDIR /app
---> 19ad596f9fc1
Removing intermediate container 790fac2040f1
Step 6/11 : RUN mkdir /app/logs
---> Running in cb66151a4694
---> 18d6c4970d04
Removing intermediate container cb66151a4694
Step 7/11 : VOLUME /app/logs
---> Running in b8a98a38a054
---> fa68603ccf30
Removing intermediate container b8a98a38a054
Step 8/11 : COPY --from=builder /assets/css /app/assets/css
COPY failed: stat /var/lib/docker/aufs/mnt/0ddcc250ed9c4eb1de46305e62cb2303274b027b2c9b0ddd09471fce3c3ed619/assets/css: no such file or directory
So why can't /assets/css be copied into the final image? Why can't the Docker engine find it?
This is the problem:
VOLUME /assets
Each build step runs in a new container, and whatever a step drops into the unnamed volume is discarded when that container is committed.
Check that with:
RUN sass --update --force --sourcemap=none --stop-on-error /assets/sass:/assets/css &&\
# sass directory isn't needed
rm -r assets/sass
RUN cd /assets/css && ls && pwd
(as a separate RUN instruction)
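The practical fix, sketched here on the assumption that nothing else depends on that volume, is to drop the VOLUME line from the builder base image so /assets persists across build steps:
FROM ruby:2.4.1-slim
RUN mkdir /assets
# no VOLUME /assets here, so files written under /assets survive each build step
RUN gem install sass
ENTRYPOINT ["sass"]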

Unable to copy source code with docker build

I have an npm package where we keep our common code and publish it to an internal repository. The package name is docker-images. Inside it I have a Dockerfile with the following:
FROM <Our internal base image>
# Setting src variable.
ARG src
# Set working directory
WORKDIR /home/default
USER root
# Copy the src code
COPY $src /home/default
# Install all the dependencies
RUN npm install
# Change permissions to default user and ensure we enter at the right spot
RUN chown -R default:default /home/default
USER default
Also in this package I have a shell script that does the building:
OPTIND=1 # Reset getopts in case it was changed in a previous run
while getopts "h::f::s::" opt; do
case "$opt" in
h)
exit 1
;;
f)
dockerfile=$OPTARG
;;
s)
src=$OPTARG
;;
*)
exit 1
;;
esac
done
docker build --pull=true --build-arg "src=${src}" --tag="latest" --file=${dockerfile} ${src}
From another npm package I have a script which calls this script to do the build; that script does:
npm install docker-images
PKG_ROOT=$(cd "$(dirname "$BASH_SOURCE")" && cd ../ && pwd)
./node_modules/docker-images/scripts/publish.sh -f "$PKG_ROOT/node_modules/docker-images/dockerfiles/dockerfile" -s "$PKG_ROOT"
However, when it builds on our Jenkins box it gives me this error:
Step 3 : ARG src
---> Using cache
---> 09e6987081e7
Step 4 : WORKDIR /home/default
---> Using cache
---> d4f1edf337ca
Step 5 : USER root
---> Using cache
---> f5e52439f60f
Step 6 : COPY $src /home/default
lstat home/jenkins-slave/workspace/dockerbuild: no such file or directory
I also printed out the command that my shell script is calling, which is:
docker build --pull=true --build-arg src=/home/jenkins-slave/workspace/dockerbuild --file=/home/jenkins-slave/workspace/dockerbuild/node_modules/docker-images/dockerfiles/dockerfile /home/jenkins-slave/workspace/dockerbuild
Obviously the path /home/jenkins-slave/workspace/dockerbuild exists, since it can find the Dockerfile, but I don't know why it won't copy the src.
The paths in Docker are all relative, so just as an experiment, can you try the following where you are copying the source:
WORKDIR $src
COPY . /home/default
Rambler is right that the paths need to be relative to the Docker build context, but you don't need to change your Dockerfile; just use a relative path in the build-arg value.
With this simple Dockerfile:
FROM alpine
ARG src
COPY $src .
You will get the failure if you use a full path in the argument:
> docker build -t temp --build-arg src=/home/scrapbook/tutorial/src .
...
Step 3 : COPY $src .
lstat home/scrapbook/tutorial/src: no such file or directory
But if you use a relative path from the build context, the same Dockerfile is fine:
> docker build -t temp --build-arg src=./src .
...
Successfully built d4899d51a284
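Applied to the publish script in the question (a sketch, keeping the question's variable names), the build context stays as ${src} and the src build argument becomes context-relative:
docker build --pull=true --build-arg "src=." --tag="latest" --file="${dockerfile}" "${src}"
With src=., the COPY $src /home/default line copies the whole build context into the image.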
