I build my project with a Dockerfile. The project requires OpenVINO to be installed, and OpenVINO needs some environment variables to be set dynamically by a script that depends on the architecture. The script is: script to set environment variables
As far as I have learned, a Dockerfile can't set environment variables in the image from a script.
Which way should I follow to solve this problem?
I need to set the variables because later I continue by installing OpenCV, which looks at these environment variables.
My first idea is to put the script into the ~/.bashrc file so the variables are set whenever bash starts; if there is some trick to start bash for a moment during the build, that could solve my problem.
My second idea is to build an OpenVINO image, create a container from it, connect to it, and initialize the variables by running the script manually inside the container. After that, commit the container to an image, create a new Dockerfile, and continue the remaining build steps using this image.
OpenVINO Dockerfile example and the line that runs the script
My Dockerfile:
FROM ubuntu:18.04
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_2020.2.120.tgz
ENV INSTALLDIR /opt/intel/openvino
# openvino download
RUN curl -LOJ "${DOWNLOAD_LINK}"
# opencv download
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.3.0.zip && \
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.3.0.zip
RUN apt-get -y install sudo
# openvino installation
RUN tar -xvzf ./*.tgz && \
cd l_openvino_toolkit_p_2020.2.120 && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
# rm -rf /tmp/* && \
sudo -E $INSTALLDIR/install_dependencies/install_openvino_dependencies.sh
WORKDIR /home/sa
RUN /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh" && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> /home/sa/.bashrc && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
$INSTALLDIR/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
$INSTALLDIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN bash
# opencv installation
RUN unzip opencv.zip && \
unzip opencv_contrib.zip && \
# rm opencv.zip opencv_contrib.zip && \
mv opencv-4.3.0 opencv && \
mv opencv_contrib-4.3.0 opencv_contrib && \
cd ./opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_INF_ENGINE=ON -D ENABLE_CXX11=ON -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/sa/opencv_contrib/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D WIDTH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D WITH_GSTREAMER=OFF -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=OFF .. && \
make && \
make install && \
ldconfig
You need to cause the shell to load that file in every RUN command where you use it, and also at container startup time.
For startup time, you can use an entrypoint wrapper script:
#!/bin/sh
# Load the script of environment variables
. /opt/intel/openvino/bin/setupvars.sh
# Run the main container command
exec "$#"
Then in the Dockerfile, you need to include the environment variable script in RUN commands, and make this script be the image's ENTRYPOINT.
RUN . /opt/intel/openvino/bin/setupvars.sh && \
/opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
/opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN ... && \
. /opt/intel/openvino/bin/setupvars.sh && \
cmake ... && \
make && \
...
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD same as the command you set in the original image
If you docker exec debugging shells in the container, they won't see these environment variables and you'll need to manually re-read the environment variable script. If you use docker inspect to look at low-level details of the container, it also won't show the environment variables.
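For example, to get a debugging shell that does see the variables, you can source the script as part of the docker exec command (the container name here is just a placeholder):
docker exec -it my-openvino-container \
    bash -c '. /opt/intel/openvino/bin/setupvars.sh && exec bash'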
It looks like that script just sets a couple of environment variables (especially $LD_LIBRARY_PATH and $PYTHONPATH), albeit to somewhat long-winded values, and you could just set these with ENV statements in the Dockerfile.
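A sketch of what that could look like; the exact values have to be copied from whatever setupvars.sh actually exports on your architecture, so the paths below are placeholders rather than the real ones:
ENV INTEL_OPENVINO_DIR=/opt/intel/openvino
ENV LD_LIBRARY_PATH=/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
ENV PYTHONPATH=/opt/intel/openvino/python/python3.6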
If you look at the docker build output, there are lines like ---> 0123456789ab after each build step; those are valid image IDs that you can docker run. You could run
docker run --rm 0123456789ab \
env \
| sort > env-a
docker run --rm 0123456789ab \
sh -c '. /opt/intel/openvino/bin/setupvars.sh && env' \
| sort > env-b
This will give you two local files with the environment variables with and without running this setup script. Find the differences (say, with comm(1)), put ENV before each line, and add that to your Dockerfile.
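A minimal sketch of that last step, assuming the two sorted files produced above:
# lines that only appear in env-b were added or changed by setupvars.sh
comm -13 env-a env-b
# prefix each of them with ENV so they can be pasted into the Dockerfile
comm -13 env-a env-b | sed 's/^/ENV /'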
You can't really use .bashrc in Docker. Many common paths don't invoke its startup files: in the language of that documentation, neither a Dockerfile RUN command nor a docker run instruction is an "interactive shell" so those don't read dot files, and usually docker run ... command doesn't invoke a shell at all.
You also don't need sudo (you are already running as root, and an interactive password prompt will fail); RUN sh -c is redundant (Docker inserts it on its own); and source isn't a standard shell command (prefer the standard ., which will work even on Alpine-based images that don't have shell extensions).
Related
I have a docker file as below.
FROM registry.access.redhat.com/ubi8/ubi
ENV JAVA_HOME /usr/lib/jvm/zulu11
RUN \
set -xeu && \
yum -y -q install https://cdn.azul.com/zulu/bin/zulu-repo-1.0.0-1.noarch.rpm && \
yum -y -q install python3 zulu11-jdk less && \
yum -q clean all && \
rm -rf /var/cache/yum && \
alternatives --set python /usr/bin/python3 && \
groupadd trino --gid 1000 && \
useradd trino --uid 1000 --gid 1000 && \
mkdir -p /usr/lib/trino /data/trino && \
chown -R "trino:trino" /usr/lib/trino /data/trino
ARG TRINO_VERSION
COPY trino-cli-${TRINO_VERSION}-executable.jar /usr/bin/trino
COPY --chown=trino:trino trino-server-${TRINO_VERSION} /usr/lib/trino
COPY --chown=trino:trino default/etc /etc/trino
EXPOSE 8080
USER trino:trino
ENV LANG en_US.UTF-8
CMD ["/usr/lib/trino/bin/run-trino"]
HEALTHCHECK --interval=10s --timeout=5s --start-period=10s \
CMD /usr/lib/trino/bin/health-check
I would like to extend this Dockerfile and run a couple of instructions before running the main command in the Dockerfile. I'm not sure how to do that.
If you want to run those commands when the container starts, you can use an entrypoint to leave the original command untouched.
The exec "$@" will execute the arguments that were passed to the entrypoint as PID 1. Whatever arguments were provided as CMD will be in $@, so you essentially execute the CMD after the ENTRYPOINT, no matter what that command might be.
Create an entrypoint script:
#!/usr/bin/env sh
# run some preparation steps here
exec "$@"
And then you copy that into your build and use it.
FROM baseimage
COPY --chmod=755 ./entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
# setting a new ENTRYPOINT resets an inherited CMD, so repeat the original one
CMD ["/usr/lib/trino/bin/run-trino"]
If you want to run those commands on build, use a FROM instruction and add your RUN instructions. The new image will use the CMD from the base image. So you don't need to set any CMD.
FROM baseimage
RUN more
RUN stuff
Since you can only have one CMD statement in a Dockerfile (if you have more than one, only the last one is executed), you need to get all your commands into a single CMD statement.
You can use the ability of the shell to chain commands using the && operator, like this
CMD my-first-command && \
my-second-command && \
/usr/lib/trino/bin/run-trino
That will run your commands first before running run-trino which is the current CMD.
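One detail worth noting: a shell-form CMD like this runs under /bin/sh -c, so the shell stays as PID 1. If you want run-trino to receive stop signals directly, you can exec the final command; a sketch:
CMD my-first-command && \
    my-second-command && \
    exec /usr/lib/trino/bin/run-trino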
When you build on a base image, your own Dockerfile's CMD overrides the CMD inherited from that base image, so you can extend the image by writing your own on top of it.
For example, this is my first image (that I get from a third party, just like you):
FROM alpine:latest
CMD echo "hello"
I want to extend it to output hello world instead of hello, so I write another Dockerfile like this:
FROM first:latest
CMD echo "hello world"
and when I build the image and run it,
docker build -t second .
docker run second
I get my expected output
hello world
Hopefully that helps
I have a base Docker image:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& cp /root/.bashrc /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
and derive from it another one:
ARG BASE
FROM $BASE
RUN source /opt/conda/bashrc && micromamba activate \
&& micromamba create --file environment.yaml -p /env
While building the second image I get the following error for the RUN instruction: micromamba: command not found.
If I run the first (base) image manually, I can launch micromamba and it runs correctly.
If I run the temporary image that was created while building the second image, micromamba is available on the CLI and runs correctly.
If I inherit from debian:buster or alpine, for example, it builds perfectly.
What is the problem with Ubuntu? Why can't it see micromamba while building the second Docker image?
P.S. I am using scaffold for the build, so it correctly resolves where $BASE comes from and what it is.
The ubuntu:21.04 image comes with a /root/.bashrc file that begins with:
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When the second Dockerfile executes RUN source /opt/conda/bashrc, PS1 is not set and thus the remainder of the bashrc file does not execute. The remainder of the bashrc file is where micromamba initialization occurs, including the setup of the micromamba bash function that is used to activate a micromamba environment.
The debian:buster image has a smaller /root/.bashrc that does not have a line similar to [ -z "$PS1" ] && return and therefore the micromamba function gets loaded.
The alpine image does not come with a /root/.bashrc so it also does not contain the code to exit the file early.
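You can check this yourself by printing the top of the file from the unmodified images; the guard shows up within the first few lines of the Ubuntu one:
docker run --rm ubuntu:21.04 head -n 6 /root/.bashrc
docker run --rm debian:buster cat /root/.bashrc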
If you want to use the ubuntu:21.04 image, you could modify your first Dockerfile like this:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
# this line has been modified (it replaces the cp of /root/.bashrc)
&& grep -v '[ -z "\$PS1" ] && return' /root/.bashrc > /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
This will strip out the one line that causes the early termination.
Alternatively, you could make use of the existing mambaorg/micromamba Docker image. The mambaorg/micromamba:latest image is based on debian:slim, but mambaorg/micromamba:jammy will get you an Ubuntu-based image (disclosure: I maintain this image).
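For the prebuilt image route, a rough sketch of the derived Dockerfile might look like the following; the exact COPY conventions and flags should be checked against the mambaorg/micromamba documentation, so treat this as an assumption-laden outline rather than the canonical usage:
FROM mambaorg/micromamba:jammy
# environment.yaml is the same file the question passes to micromamba create
COPY environment.yaml /tmp/environment.yaml
RUN micromamba install -y -n base -f /tmp/environment.yaml && \
    micromamba clean --all --yes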
I'm just testing out Docker so this might be a pretty simple question but I cannot seem to find out why it's not doing what I expect.
I created a pretty simple Dockerfile for testing, just to build a simple image that installs some packages, clones a git repo and installs its requirements:
FROM ubuntu:18.04
ENV PYTHONEXEC=python3 \
PIPEXEC=pip \
VIRTUALENVEXEC=virtualenv \
GITREPO=https://github.com/test/test.git \
REPODIR=test
RUN apt-get update && apt-get install -y git \
python3 \
python3-dev \
python3-virtualenv \
python-virtualenv \
qt5-default \
libcurl4-openssl-dev \
libxml2 \
libxml2-dev \
libxslt1-dev \
libssl-dev \
virt-viewer
RUN mkdir -p /app
WORKDIR /app
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
CMD ["sleep", "1000000"]
Then I build the image with:
docker build -t gitapp:latest .
This works so far. However, if I specify a -e parameter on the docker container run command, it seems not to replace it in the last RUN command.
So if I run docker container run -d -e "REPODIR=blah" gitapp, I expect it to be cloned in /app/blah, but it's still cloned in the /app/test directory.
When I run a docker container exec -it -e "REPODIR=blah" <container-id> env I get:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=2f6ba38341d6
TERM=xterm
REPODIR=blah
PYTHONEXEC=python3
PIPEXEC=pip
VIRTUALENVEXEC=virtualenv
GITREPO=https://github.com/test/test.git
HOME=/root
So it seems that the variable is indeed passed to the container. Then why isn't it passed to the last RUN command so it clones the repo into the right directory? Am I missing something basic here?
When you execute docker run, you are instructing a container to execute the Dockerfile's CMD or ENTRYPOINT command. The Dockerfile instructions above the entrypoint have already been executed during the build and are not executed again at runtime.
That's exactly why your GitHub repo is cloned into the directory defined initially in the Dockerfile and not into the one passed to the run command with the -e flag.
A workaround would be to alter your image's entrypoint. You can move this part
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
into a bash script (let's call it myscript.sh) that will be executed as the image's entrypoint. Copy this file into a preferred location during the build, make sure it has the executable flag set, and edit your Dockerfile accordingly:
CMD ["/path_to_script/myscript.sh"]
This, however, has the caveat that the script will be executed each time the container is started, in contrast with your current setup, possibly leading to a delay depending on the content of myscript.sh.
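A minimal sketch of what that script and the corresponding Dockerfile lines could look like, keeping the answer's placeholder path and the image's original sleep command (names are illustrative):
#!/bin/bash
# myscript.sh: runs at container start, so it sees the values passed with -e
set -e
git clone "$GITREPO" "$REPODIR"
$VIRTUALENVEXEC -p $PYTHONEXEC venv
. venv/bin/activate
cd "$REPODIR"
$PIPEXEC install -r requirements.txt
# keep the container alive, as the original CMD did
exec sleep 1000000
And in the Dockerfile, before the CMD shown above:
COPY myscript.sh /path_to_script/myscript.sh
RUN chmod +x /path_to_script/myscript.sh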
I am building a docker image and want to permanently change its env vars and path. My simplified dockerfile is like this:
FROM python:3.6.8-slim-stretch
USER root
RUN pip3 install pyspark
RUN touch /etc/profile.d/set-up-env.sh && \
echo export SPARK_HOME='/usr/local/lib/python3.6/site-packages/pyspark' >> /etc/profile.d/set-up-env.sh && \
echo export PATH='${SPARK_HOME}/bin:${PATH}' >> /etc/profile.d/set-up-env.sh && \
echo export PYSPARK_PYTHON='python3.6' >> /etc/profile.d/set-up-env.sh && \
chmod +x /etc/profile.d/set-up-env.sh
The image can be built successfully with docker build -t data-job-base .
But when I run it with docker run --rm -it data-job-base bash, SPARK_HOME is empty in the running container and PATH is unchanged. If I cat /etc/profile.d/set-up-env.sh I can see that it was written properly:
export SPARK_HOME=/usr/local/lib/python3.6/site-packages/pyspark
export PATH=${SPARK_HOME}/bin:${PATH}
export PYSPARK_PYTHON=python3.6
I don't understand why set-up-env.sh doesn't get run when I start the shell.
Note that modifying /etc/environment has no effect either.
You can use the ENV instruction in the Dockerfile. One detail: variables referenced inside an ENV instruction expand to the values they had before that instruction, so PATH has to go into a second ENV to see the new SPARK_HOME:
FROM python:3.6.8-slim-stretch
USER root
ENV SPARK_HOME=/usr/local/lib/python3.6/site-packages/pyspark \
    PYSPARK_PYTHON=python3.6
ENV PATH=${SPARK_HOME}/bin:${PATH}
RUN pip3 install pyspark
OK, I have solved this by adding my shell commands to /etc/bash.bashrc instead of profile, profile.d, or environment.
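That works because bash only reads /etc/profile and /etc/profile.d/*.sh for login shells, while docker run ... bash starts an interactive non-login shell, which on Debian-based images reads /etc/bash.bashrc (and ~/.bashrc) instead. Roughly, the change amounts to this (a sketch, and note it still only helps interactive bash sessions, not plain docker run commands):
RUN echo 'export SPARK_HOME=/usr/local/lib/python3.6/site-packages/pyspark' >> /etc/bash.bashrc && \
    echo 'export PATH=${SPARK_HOME}/bin:${PATH}' >> /etc/bash.bashrc && \
    echo 'export PYSPARK_PYTHON=python3.6' >> /etc/bash.bashrc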
Here is my Dockerfile:
# Pull base image (Ubuntu)
FROM dockerfile/python
# Install socat
RUN \
cd /opt && \
wget http://www.dest-unreach.org/socat/download/socat-1.7.2.4.tar.gz && \
tar xvzf socat-1.7.2.4.tar.gz && \
rm -f socat-1.7.2.4.tar.gz && \
cd socat-1.7.2.4 && \
./configure && \
make && \
make install
RUN \
start-stop-daemon --quiet --oknodo --start --pidfile /run/my.pid --background --make-pidfile \
--exec /opt/socat-1.7.2.4/socat PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0 \
TCP-LISTEN:9334,reuseaddr,fork
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
I build an image from this Dockerfile and run it like this:
docker run -d -t -i myaccount/socat_test
After this I log in to the container and check whether the socat daemon is running, but it is not. I just started playing around with Docker. Am I misunderstanding the concept of a Dockerfile? I would expect Docker to run the socat command when the container starts.
You are confusing RUN and CMD. The RUN instruction is meant to build up the Docker image, as you correctly did with the first one. The second RUN will also be executed when building the image, but not when the container is run. If you want to execute commands when the Docker container is started, you ought to use the CMD instruction.
More information can be found in the Dockerfile reference. For instance, you could also use ENTRYPOINT instead of CMD.
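As a sketch of that change for this Dockerfile, the second RUN could be dropped and socat started as the container's command instead, replacing the existing CMD ["bash"]; here it runs in the foreground rather than through start-stop-daemon, which is a simplification on my part:
# Define default command: start socat when the container starts
CMD ["/usr/local/bin/socat", "PTY,link=/dev/ttyMY,echo=0,raw,unlink-close=0", "TCP-LISTEN:9334,reuseaddr,fork"]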