Editing a Dockerfile so the image is not run in an interactive shell, but from the ENTRYPOINT - docker

I'm trying to use the Docker image of a specific piece of software (Picard); that image is designed to be run interactively. In fact, an already-built Docker image is provided via Docker Hub:
docker pull broadinstitute/picard
This image works perfectly with the following command:
sudo docker run -i -t -v $PWD:/usr/working broadinstitute/picard
So that within the image one could launch the actual program like:
java -jar /usr/picard/picard.jar [COMMAND] [OPTIONS] ...
What I am trying to accomplish is to execute this image without entering in an interactive shell, simply as:
sudo docker run --rm -v $PWD:/usr/working broadinstitute/picard [COMMAND] [OPTIONS] ...
As far as I understand, this could be done by creating an ENTRYPOINT in the Dockerfile (see it in the appendix below), but adding the following line at the bottom of the Dockerfile does not work:
ENTRYPOINT ["java -jar /usr/picard/picard.jar"]
Instead, when I run the image as above, no output is generated, and if a specific command is called (e.g. CreateSequenceDictionary), I get the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"CreateSequenceDictionary\": executable file not found in $PATH": unknown.
What am I missing?
Dockerfile
The Dockerfile can be found in the GitHub repo at: https://github.com/broadinstitute/picard/blob/master/Dockerfile. It looks like the following:
FROM openjdk:8
MAINTAINER Broad Institute DSDE <dsde-engineering@broadinstitute.org>
ARG build_command=shadowJar
ARG jar_name=picard.jar
# Install ant, git for building
RUN apt-get update && \
apt-get --no-install-recommends install -y --force-yes \
git \
r-base \
ant && \
apt-get clean autoclean && \
apt-get autoremove -y
# Assumes Dockerfile lives in root of the git repo. Pull source files into container
COPY / /usr/picard/
WORKDIR /usr/picard
# Build the distribution jar, clean up everything else
RUN ./gradlew ${build_command} && \
mv build/libs/${jar_name} picard.jar && \
./gradlew clean && \
rm -rf src && \
rm -rf gradle && \
rm -rf .git && \
rm gradlew && \
rm build.gradle
RUN mkdir /usr/working
WORKDIR /usr/working

The problem is how the ENTRYPOINT is defined.
It should be
ENTRYPOINT ["java", "-jar", "/usr/picard/picard.jar"]
src: https://docs.docker.com/v17.09/engine/reference/builder/#entrypoint
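For reference, once the exec-form ENTRYPOINT is in place, anything after the image name is appended as arguments to the entrypoint, so a run along these lines should work (the reference file name is just a placeholder):
sudo docker run --rm -v $PWD:/usr/working broadinstitute/picard \
    CreateSequenceDictionary R=reference.fasta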

I just learned about another alternative that does not require modifying the original Dockerfile: overriding the default ENTRYPOINT from the CLI (this works even if the original image does not define one).
This can be done with the --entrypoint option, but it only takes the name of the file to be executed within the image. If additional arguments are needed, they must be passed after the image name. For example:
docker run --entrypoint="java" -v $PWD:/usr/working broadinstitute/picard -jar /usr/picard/picard.jar [COMMAND] [OPTIONS]
This blog explains a bit more on the topic.

Related

Extending a docker image to run a diff command

I have a Dockerfile as below.
FROM registry.access.redhat.com/ubi8/ubi
ENV JAVA_HOME /usr/lib/jvm/zulu11
RUN \
set -xeu && \
yum -y -q install https://cdn.azul.com/zulu/bin/zulu-repo-1.0.0-1.noarch.rpm && \
yum -y -q install python3 zulu11-jdk less && \
yum -q clean all && \
rm -rf /var/cache/yum && \
alternatives --set python /usr/bin/python3 && \
groupadd trino --gid 1000 && \
useradd trino --uid 1000 --gid 1000 && \
mkdir -p /usr/lib/trino /data/trino && \
chown -R "trino:trino" /usr/lib/trino /data/trino
ARG TRINO_VERSION
COPY trino-cli-${TRINO_VERSION}-executable.jar /usr/bin/trino
COPY --chown=trino:trino trino-server-${TRINO_VERSION} /usr/lib/trino
COPY --chown=trino:trino default/etc /etc/trino
EXPOSE 8080
USER trino:trino
ENV LANG en_US.UTF-8
CMD ["/usr/lib/trino/bin/run-trino"]
HEALTHCHECK --interval=10s --timeout=5s --start-period=10s \
CMD /usr/lib/trino/bin/health-check
I would like to extend this Dockerfile and run a couple of instructions before running the main command. I'm not sure how to do that.
If you want to run those commands when the container starts, you can use an entrypoint to leave the original command untouched.
The exec "$@" will execute the arguments that were passed to the entrypoint with PID 1. Whatever was provided as CMD ends up in "$@", so you essentially execute the CMD after the ENTRYPOINT, no matter what that command is.
Create an entrypoint script:
#!/usr/bin/env sh
# run some preparation
exec "$@"
Then copy it into your build and use it:
FROM baseimage
COPY --chmod=755 ./entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
# setting ENTRYPOINT resets a CMD inherited from the base image, so restate it
CMD ["/usr/lib/trino/bin/run-trino"]
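A minimal sketch of such an entrypoint for the Trino image above (the preparation steps are placeholders); since CMD is ["/usr/lib/trino/bin/run-trino"], that is what arrives in "$@" and gets exec'd:
#!/usr/bin/env sh
# hypothetical preparation steps
mkdir -p /data/trino/scratch
echo "preparing trino environment..."
# hand off to the original CMD as PID 1
exec "$@"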
If you want to run those commands at build time instead, use a FROM instruction and add your RUN instructions. The new image will use the CMD from the base image, so you don't need to set any CMD.
FROM baseimage
RUN more
RUN stuff
Since you can only have one CMD statement in a Dockerfile (if you have more than one, only the last one is executed), you need to get all your commands into a single CMD statement.
You can use the ability of the shell to chain commands using the && operator, like this
CMD my-first-command && \
my-second-command && \
/usr/lib/trino/bin/run-trino
That will run your commands first, before running run-trino, which is the current CMD.
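Note that this shell-form CMD runs under /bin/sh -c, so the shell, not run-trino, ends up as PID 1 and receives signals such as the SIGTERM sent by docker stop; the exec-based entrypoint shown above avoids that.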
When you extend a base image, your own CMD simply overrides the CMD from the base image, so you can change the command by writing your own image.
For example, this is my first image (that I get from a third party, just like you):
FROM alpine:latest
CMD echo "hello"
I want to extend it to output hello world instead of hello, so I write another Dockerfile like this:
FROM first:latest
CMD echo "hello world"
and when I build the image and run it,
docker build -t second .
docker run second
I get my expected output
hello world
Hopefully that helps

How to create a docker image tar directly from the build step?

I want to create a shareable image which can be docker load-ed into another Docker installation, directly from docker build.
e.g.
Dockerfile:
FROM my-app/env as builder
COPY /my-app /my-app
RUN /my-app/build.sh
FROM ubuntu:20.04
COPY --from=builder /my-app/build/my-app.exe /bin
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y --no-install-recommends \
***some stuff required at runtime*** \
&& apt-get autoclean \
&& apt-get autoremove \
&& ldconfig
ENTRYPOINT ["my-app.exe"]
CMD ["--help"]
Running something like:
/my-app$ docker build -t my-app/exe .
will produce an image which I can use with
$ docker run my-app/exe
If I want to now share this, I need to do:
$ docker save -o my-app.tar my-app/exe
which creates the archive my-app.tar which can be shared with another system running the same arch/OS and used by:
/my-othersystem/my-app$ docker load < my-app.tar
And
$ docker run my-app/exe
now works on the other system.
HOWEVER
This requires building the image into the build system's docker repository and then saving the image to a file. I don't plan to run the executable on the build system, so I don't want it taking up space there.
You can do:
/my-app$ DOCKER_BUILDKIT=1 docker build --output type=tar,dest=out.tar .
But this creates a filesystem tarball, not an image. I want it exported as a docker image compatible with docker load directly. Is this possible?
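One approach that seems to fit, assuming BuildKit/buildx is available: the docker exporter writes a docker load-compatible image tarball to a file instead of a flat filesystem, without keeping the image in the local store:
docker buildx build -t my-app/exe --output type=docker,dest=my-app.tar .
# on the target system:
docker load < my-app.tar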

Run Python scripts on command line running Docker images

I built a docker image using a Dockerfile, with Python and some libraries inside (none of my project code inside). In my local working dir there are some scripts to be run in the container. So here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns,
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
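This looks like the cause of the error: because the image sets ENTRYPOINT ["/bin/bash"], everything after the image name (python src/main.py ...) is handed to bash as a script file to run, and bash cannot execute the python binary as a shell script, hence "cannot execute binary file". One workaround that keeps the image as-is is to pass the command through bash -c:
docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test \
    -c "python src/main.py --config=test --results-dir=/home/me/Results"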
Try making the file executable before running it, as John mentioned, in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/main.py #<- just add this also (COPY puts the file at /usr/local/share/main.py)
# the script also needs a #!/usr/bin/env python shebang line to be executed directly
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a python script in docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["python", "/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
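With a CMD like this, docker run --rm <image> runs the script with those default options; bear in mind that anything you pass after the image name replaces the whole CMD, so use an ENTRYPOINT instead if you want to append extra options at run time.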

Docker CMD doesn't see installed components

I am trying to build a docker image using the following Dockerfile.
FROM ubuntu:latest
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update packages
RUN apt-get -y update && apt-get install -y \
curl \
build-essential \
libssl-dev \
git \
&& rm -rf /var/lib/apt/lists/*
ENV APP_NAME testapp
ENV NODE_VERSION 5.10
ENV SERVE_PORT 8080
ENV LIVE_RELOAD_PORT 8888
# Install nvm, node, and angular
RUN (curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash -) \
&& source /root/.nvm/nvm.sh \
&& nvm install $NODE_VERSION \
&& npm install -g angular-cli \
&& ng new $APP_NAME \
&& cd $APP_NAME \
&& npm run postinstall
EXPOSE $SERVE_PORT $LIVE_RELOAD_PORT
WORKDIR $APP_NAME
EXPOSE 8080
CMD ["node", "-v"]
But I keep getting an error when trying to run it:
docker: Error response from daemon: Container command 'node' not found or does not exist..
I know node is being properly installed, because if I rebuild the image with the CMD line commented out of the Dockerfile
#CMD ["node", "-v"]
And then start a shell session
docker run -it testimage
I can see that all my dependencies are there and return proper results
node -v
v5.10.1
.....
ng -v
angular-cli: 1.0.0-beta.5
node: 5.10.1
os: linux x64
So my question is. Why is the CMD in Dockerfile not able to run these and how can I fix it?
When using the shell to RUN node via nvm, you have sourced the nvm.sh file, and it sets a $PATH variable in its environment so that executables installed via nvm can be found.
When you run commands via docker run, only a default PATH is injected:
docker run <your-ubuntu-image> echo $PATH
docker run <your-ubuntu-image> which node
docker run <your-ubuntu-image> nvm which node
Specifying a CMD in array (exec) form runs the binary directly, without a shell, so nvm.sh is never sourced and the nvm-modified $PATH is never set.
Provide the full path to your node binary, e.g.:
CMD ["/bin/node","-v"]
It's better to exec the node binary directly rather than going through the nvm helper scripts, due to the way Docker's signal processing works. It might be easier to use the Node apt packages in docker rather than nvm.
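One way to avoid hard-coding the version-specific nvm path is to resolve it at build time and symlink it somewhere on the default PATH — a sketch, assuming the Dockerfile from the question (which already links /bin/sh to bash, so source works in RUN):
RUN source /root/.nvm/nvm.sh && \
    ln -s "$(nvm which $NODE_VERSION)" /usr/local/bin/node
CMD ["/usr/local/bin/node", "-v"]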

RUN command not called in Dockerfile

Here is my Dockerfile:
# Pull base image (Ubuntu)
FROM dockerfile/python
# Install socat
RUN \
cd /opt && \
wget http://www.dest-unreach.org/socat/download/socat-1.7.2.4.tar.gz && \
tar xvzf socat-1.7.2.4.tar.gz && \
rm -f socat-1.7.2.4.tar.gz && \
cd socat-1.7.2.4 && \
./configure && \
make && \
make install
RUN \
start-stop-daemon --quiet --oknodo --start --pidfile /run/my.pid --background --make-pidfile \
--exec /opt/socat-1.7.2.4/socat PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0 \
TCP-LISTEN:9334,reuseaddr,fork
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
I build an image from this Dockerfile and run it like this:
docker run -d -t -i myaccount/socat_test
After this I log in to the container and check whether the socat daemon is running, but it is not. I just started playing around with docker. Am I misunderstanding the concept of the Dockerfile? I would expect docker to run the socat command when the container starts.
You are confusing RUN and CMD. The RUN instruction is meant to build up the docker image, as you correctly did with the first one. The second RUN is also executed while building the image, but the daemon it starts does not survive into the running container. If you want to execute commands when the docker container is started, you ought to use the CMD instruction.
More information can be found in the Dockerfile reference. For instance, you could also use ENTRYPOINT instead of CMD.
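A minimal sketch of that approach for the socat case, reusing the paths from the question — start the daemon when the container launches, then drop into the interactive shell:
# Define default command: start socat in the background, then bash
CMD start-stop-daemon --quiet --oknodo --start --pidfile /run/my.pid --background --make-pidfile \
    --exec /opt/socat-1.7.2.4/socat -- PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0 \
    TCP-LISTEN:9334,reuseaddr,fork \
    && bash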
