I have the following file arrangement for a docker image (salmon):
salmon
├── docker
│   └── Dockerfile
└── src
    ├── align_utils.py
    ├── job_utils.py
    ├── run_salmon.py
    └── s3_utils.py
My entrypoint script in this case is run_salmon.py, which also makes use of the other .py scripts in src/. When I try to build the docker image via docker build -t salmon:pipeline . within docker/, I get the error:
COPY failed: stat /var/lib/docker/tmp/docker-builder013511307/src/run_salmon.py: no such file or directory
How do I figure out where the entrypoint script is located relative to the working dir in the dockerfile?
Dockerfile:
# Use Python base image from DockerHub
FROM python:2.7
# INSTALL CMAKE
RUN apt-get update && apt-get install -y sudo \
&& sudo apt-get update \
&& sudo apt-get install -y \
cmake \
wget
#INSTALL BOOST
RUN wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz \
&& mv boost_1_66_0.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf boost_1_66_0.tar.gz \
&& cd ./boost_1_66_0/ \
&& ./bootstrap.sh \
&& ./b2 install
#INSTALL SALMON
RUN wget https://github.com/COMBINE-lab/salmon/releases/download/v0.14.1/salmon-0.14.1_linux_x86_64.tar.gz \
&& mv salmon-0.14.1_linux_x86_64.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf salmon-0.14.1_linux_x86_64.tar.gz \
&& cd salmon-latest_linux_x86_64/
ENV PATH=$PATH:/usr/local/bin/salmon-latest_linux_x86_64/bin/
# Copy the pipeline scripts to the root of the image
WORKDIR /
COPY src/run_salmon.py /
COPY src/s3_utils.py /
COPY src/job_utils.py /
COPY src/align_utils.py /
ENTRYPOINT ["python", "/run_salmon.py"]
When you run docker build -t salmon:pipeline . from inside the docker directory, you are specifying the current directory as the build context.
When the build runs COPY src/run_salmon.py /, it resolves that path relative to the root of the context (i.e., salmon/docker/src/run_salmon.py), where the files don't exist.
Instead, use the salmon directory as the build context and point at the Dockerfile with the -f flag. Run this from inside the salmon directory:
docker build -t salmon:pipeline -f docker/Dockerfile .
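With salmon/ as the build context, each COPY src/... path now resolves against the salmon directory, so the existing instructions work unchanged. As an optional simplification (a sketch, not required for the fix), the four COPY lines can be collapsed into one:
# Copies run_salmon.py, s3_utils.py, job_utils.py and align_utils.py in a single layer
COPY src/ /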
Related
We have a Docker image whose size is around 858 MB and we want to reduce its size.
Out of 858 MB
APP Size: 352 MB (EAR FILE)
WILDFLY (18.0.1): 212 MB
adoptopenjdk:11: 123 MB
USR DIR : 116 MB
Is there any way to reduce the size?
First Dockerfile (wildfly-11.8:latest):
FROM adoptopenjdk/openjdk11:jre-11.0.6_10-alpine
ENV WILDFLY_VERSION 18.0.1.Final
ENV WILDFLY_SHA1 ef0372589a0f08c53e7291721a7e3f7d9
ENV MODULES_FILENAME modules.tar
ENV MODULES_SHA1 2dcfee4045b7d026d7d6290cebc772482
ENV JBOSS_HOME /opt/wildfly
RUN useradd -ms /bin/bash jboss
RUN cd $HOME \
&& curl -O -k https://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.tar.gz \
&& sha1sum wildfly-$WILDFLY_VERSION.tar.gz | grep $WILDFLY_SHA1 \
&& tar xf wildfly-$WILDFLY_VERSION.tar.gz \
&& mv $HOME/wildfly-$WILDFLY_VERSION $JBOSS_HOME \
&& rm wildfly-$WILDFLY_VERSION.tar.gz \
&& chown -R jboss:0 ${JBOSS_HOME} \
&& chmod -R g+rw ${JBOSS_HOME} \
&& curl -p ftp://ftp.co.il//wildfly/1801/$MODULES_FILENAME --user "app:qax" --ftp-create-dirs -O \
&& sha1sum $MODULES_FILENAME | grep $MODULES_SHA1 \
&& tar xf ${MODULES_FILENAME} \
&& rm ${MODULES_FILENAME} \
&& cp -r ./* ${JBOSS_HOME} \
&& rm -rf ./*
COPY configuration /opt/wildfly/standalone/configuration
COPY standalone.conf /opt/wildfly/bin
USER jboss
EXPOSE 8080
CMD cd /opt/wildfly/bin && ./standalone.sh -b="0.0.0.0" -c=$STANDALONE_CONFIG
Second Dockerfile
FROM wildfly-11.8:latest
ENV DEPLOYMENT_LOCATION /opt/wildfly/standalone/deployments
ARG ear_file_path
COPY $ear_file_path $DEPLOYMENT_LOCATION
There are many ways to reduce the size of a Docker image.
1. Use an Alpine or other minimal image as the base image.
Official images on Docker Hub usually publish several tags for the same repository, including slimmed-down variants. Example: python:3 vs. python:3-slim vs. python:3-alpine, where the alpine variant is the smallest.
2. Use a multi-stage Dockerfile.
When writing a Dockerfile, split the work into stages.
Like:
(a) If you need to copy files into the working directory first, do that in the first stage.
(b) Next, if you need to install packages, do that in the next stage.
(c) Finally, if you need to expose a port and run some commands, do that in the last stage.
Example Dockerfile:
# Use an official Ubuntu runtime as the base image.
FROM ubuntu:latest AS base
# Create the working directory.
WORKDIR /application
# Add the build context contents to the working directory.
ADD . /application

# Use the base image to create a packaged image.
FROM base AS package
# Continue in the same working directory.
WORKDIR /application
# Install the necessary packages.
RUN apt update \
    && apt install -y nginx \
    && apt install -y docker.io

# Use the packaged image to create the final image.
FROM package AS final
# Continue in the same working directory.
WORKDIR /application
# Command to run when the container starts.
CMD ["echo", "Hello world"]
This way, you can reduce the size of the Docker image that's created; see the sketch below for a variant that copies only the built artifacts into a slim final stage.
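Note that a multi-stage build only shrinks the final image when the last stage starts FROM a small base and pulls in just the artifacts it needs with COPY --from. A minimal sketch for the WildFly setup above (the builder image, paths and EAR name are illustrative assumptions, not taken from your project):
# Build stage: contains the full build toolchain (illustrative image and paths)
FROM maven:3-openjdk-11 AS build
WORKDIR /build
COPY . .
RUN mvn -q package            # assumed to produce target/app.ear
# Final stage: your existing WildFly image plus only the artifact needed at runtime
FROM wildfly-11.8:latest
COPY --from=build /build/target/app.ear /opt/wildfly/standalone/deployments/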
Refer: https://docs.docker.com/develop/develop-images/multistage-build/
Thanks.
I need to COPY a native library to the Docker image's /usr/lib directory. My Quarkus project is in IntelliJ. I tried putting this binary .so file in the target/resources folder and modified Dockerfile.jvm as follows, but the file was not copied. Below are the contents of the Dockerfile that Quarkus generated during scaffolding; I added one line to the COPY section:
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
COPY target/lib/* /deployments/lib/
COPY target/*-runner.jar /deployments/app.jar
#below is the only change I made #
COPY target/resources/calc.so /usr/lib/
#end of my change #
EXPOSE 8080
USER 1001
ENTRYPOINT [ "/deployments/run-java.sh" ]
I also tried running this at my project root:
docker build -f src/main/docker/Dockerfile.jvm -t mylogin/demo:1.0-SNAPSHOT .
I get an error:
COPY failed: stat /var/lib/docker/tmp/docker-builder707527817/target/resources/calc.so: no such file or directory
Any help is much appreciated.
Thanks
It was the .dockerignore file that was auto-generated when I scaffolded the Quarkus application. Make sure it allows copying all the files you need to copy. Thanks.
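For reference, the .dockerignore that Quarkus scaffolds typically ignores everything and then whitelists only the build outputs. The exact contents vary by Quarkus version, but it looks roughly like the sketch below; the last line is the rule added for this case:
*
!target/*-runner
!target/*-runner.jar
!target/lib/*
!target/resources/calc.so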
Hi, I have a Dockerfile which is failing on the COPY command. It was building fine initially but then it suddenly started failing during the build. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils \
bc \
build-essential \
cmake \
curl \
zlib1g-dev \
libssl-dev \
libsqlite3-dev \
python3-pip \
python3-setuptools \
unzip \
g++ \
git \
python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& rm -rf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make install \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
&& curl https://sdk.cloud.google.com > install.sh \
&& bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
testproject-264512-9de8b1b35153.json ./
It fails at this step:
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads in this would be helpful.
How are you running the docker build command?
The Docker best practices docs note that the build fails if you try to build your image from stdin using -, because no build context is sent to the daemon:
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue. Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
kk.2.txt ./
If I create the image by running docker build -t testimage:1 [DOCKERFILE_FOLDER], docker builds the image and it works correctly.
However, if I try the same build from stdin:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
When you build from stdin with -, no build context is sent to the daemon; the build runs from a temporary directory such as /var/lib/docker/tmp/, so ADD and COPY have nothing to resolve against and fail.
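If you do want to keep piping the Dockerfile through stdin, docker build can also read the Dockerfile from stdin while still sending a build context, using -f- together with a context path. A sketch, run from the directory that contains kk.txt and kk.2.txt:
docker build -t test:2 -f- . <<EOF
FROM alpine:3.7
WORKDIR /usr/src/app
COPY kk.txt kk.2.txt ./
EOF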
An incorrect source path is a common cause of this error.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/
I have a problem with my Dockerfile. I'm using a COPY command like this: COPY main.py /volume1/Files/ITenso-monitor
When I build the Dockerfile, I get this error: COPY failed: stat /volume1/#docker/tmp/docker-builder543642662/main.py: no such file or directory
When I run this command:
RUN cd /volume1/Bestanden/ITenso-monitor/ && pwd \
&& ls
I get this output:
__pycache__
main.py
src
The whole Dockerfile:
FROM debian
FROM python:3.7
RUN apt-get update \
&& apt-get install -y git \
&& apt-get install -y openssh-server \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip \
&& pip3 install requests
RUN git config --global user.name "username" && \
git config --global user.password "password" && \
git clone https://Username:Token#gitlab.com/group/repo.git /volume1/Bestanden/ITenso-monitor/
RUN cd /volume1/Bestanden/ITenso-monitor/ && pwd \
&& ls
COPY main.py /volume1/Bestanden/ITenso-monitor/
CMD ["python3", "main.py"]
What is causing this problem? main.py clearly exists in that directory.
I hope someone is able to help me. Thanks in advance!
The build context is your problem. COPY can only copy files from the build context, i.e. the directory you pass to docker build on the host, not from paths inside the image. The main.py created by your git clone exists inside the image, but docker build looks for the COPY source in the context directory (or a subdirectory of it).
Let's take as an example your COPY line.
COPY main.py /volume1/Bestanden/ITenso-monitor/
Taking your COPY line as the example, docker expects the files to be laid out like this relative to the build context:
./Dockerfile
./main.py
Here is a link to the documentation https://docs.docker.com/engine/reference/commandline/build/
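In this particular Dockerfile, one way to sidestep the COPY entirely (a sketch, assuming the cloned repository already contains the main.py you want to run) is to drop the COPY line and run the file that git clone already put into the image:
# main.py is already inside the image thanks to the git clone above, so no COPY is needed
WORKDIR /volume1/Bestanden/ITenso-monitor/
CMD ["python3", "main.py"]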
I have the following code
RUN apt-get update
RUN apt-get install -y wget #install wget lib
RUN mkdir -p example && cd example #create folder and cd to folder
RUN WGET -r https://host/file.tar && tar -xvf *.tar # download tar file to example folder and untar it in same folder
RUN rm -r example/*.tar # remove the tar file
RUN MV example/foo example/bar # rename untar directory from foo to bar
But I get the following errors:
/bin/sh: 1: WGET: not found
tar: example/*.tar: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
I am a newbie in docker.
Each RUN command in your Dockerfile starts in the image's working directory (by default /), not where the previous RUN left off. Therefore your .tar file is not in the example/ directory; it actually ends up under /, because the cd in one RUN has no effect on subsequent RUN commands. Instead of doing cd example, use WORKDIR example before running wget, e.g.:
RUN apt-get update
RUN apt-get install -y wget   # install wget
RUN mkdir -p example          # create the folder (the cd is no longer needed)
# change the working directory for all subsequent commands
WORKDIR example/
RUN wget -r https://host/file.tar && tar -xvf *.tar   # download the tar file into example/ and untar it there
RUN rm -r *.tar               # remove the tar file (paths are now relative to example/)
RUN mv foo bar                # rename the untarred directory from foo to bar
Alternatively, prefix any command you'd like to execute within the example directory with cd example && ... inside the same RUN.
As Ntokozo stated, each RUN command is a separate "session" in the build process. Because of that, Docker images are best built by packing related commands into a single RUN, which yields fewer layers and a smaller overall image. So the commands could be written like so:
RUN apt-get update && \
apt-get install -y wget && \
mkdir -p example && \
cd example/ && \
wget -r https://host/file.tar && \
tar -xvf *.tar && \
rm -r *.tar && \
mv foo bar
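Combining both suggestions, a sketch that uses WORKDIR instead of cd inside the RUN (the URL and the foo/bar names come from the question; note it drops wget's -r flag so the tar file lands directly in the working directory):
RUN apt-get update && \
    apt-get install -y wget
# all later RUN commands execute inside /example
WORKDIR /example
# without -r, wget saves the file as ./file.tar in the working directory
RUN wget https://host/file.tar && \
    tar -xvf file.tar && \
    rm file.tar && \
    mv foo bar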