docker run interactive with conda environment already activated

I'd like to create a docker image such that when you run it interactively, a conda environment is already activated.
Current state:
docker run -it my_image
(base) root@1c32ba066db2:~# conda activate my_env
(my_env) root@1c32ba066db2:~#
Desired state:
docker run -it my_image
(my_env) root@1c32ba066db2:~#
More info:
In my Dockerfile, I include all the necessary RUN commands to install conda, create the environment, and activate the environment. Relevant portions reproduced below.
SHELL [ "/bin/bash", "--login", "-c" ]
...
# Install miniconda.
ENV CONDA_DIR $HOME/miniconda3
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
    chmod +x ~/miniconda.sh && \
    ~/miniconda.sh -b -p $CONDA_DIR && \
    rm ~/miniconda.sh
# Make non-activate conda commands available.
ENV PATH=$CONDA_DIR/bin:$PATH
# Make conda activate command available from /bin/bash --login shells.
RUN echo ". $CONDA_DIR/etc/profile.d/conda.sh" >> ~/.profile
# Make conda activate command available from /bin/bash --interactive shells.
RUN conda init bash
# Create and activate the environment.
RUN conda env create --force -f environment.yml
RUN conda activate my_env
When I run this, conda activate my_env seems to run and succeed. But when I enter interactively with docker run -it, the activated env is (base).
Additionally, I've tried having the last command be CMD conda activate my_env, but then it just runs that and does not enter interactive mode.

Each RUN statement (including docker run) is executed in a new shell, so one cannot simply activate an environment in a RUN command and expect it to continue being active in subsequent RUN commands.
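To see why, here is a minimal sketch (not part of the Dockerfile in question): the variable exported by the first RUN is already gone in the second, because each RUN starts a fresh shell in a fresh layer:
RUN export FOO=bar
RUN echo "FOO=$FOO"    # prints "FOO=": the export did not survive into this new shell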
Instead, you need to activate the environment as part of the shell initialization. The SHELL command has already been changed to include --login, which is great. Now you simply need to add conda activate my_env to .profile or .bashrc:
...
# Create and activate the environment.
RUN conda env create --force -f environment.yml
RUN echo "conda activate my_env" >> ~/.profile
and just be sure this is after the section added by Conda.
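For reference, the tail of ~/.profile should then read as follows (assuming HOME is /root, so CONDA_DIR expands to /root/miniconda3):
. /root/miniconda3/etc/profile.d/conda.sh
conda activate my_env
Every later RUN command, which runs under the --login shell, then starts with my_env already active.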

The following code in my Dockerfile does what you describe:
# Install anaconda
RUN cd $HOME && wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.10.3-Linux-x86_64.sh && bash Miniconda3-py38_4.10.3-Linux-x86_64.sh -b -p $HOME/miniconda
# Create env
RUN $HOME/miniconda/bin/conda init bash
RUN $HOME/miniconda/bin/conda env create -f my_env.yml
# Activate conda environment on startup
RUN echo "export PATH=$HOME/miniconda/bin:$PATH" >> $HOME/.bashrc
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate my_env" >> $HOME/.bashrc
SHELL ["/bin/bash"]
results in:
(my_env) root@e5fe69843fa1:/#
when running an interactive container.
Remember to change all instances of my_env to the name of your conda environment.

Related

Using conda in Docker

I am using conda-forge in my Dockerfile to install a ready-made environment from the conda-forge repository. Setting up the environment means a lot of packages get installed by the conda-forge commands.
The problem is that this happens every time I rebuild the Docker image.
Is there some way to cache it, so it is not reinstalled again on every build?
Critical part of code:
ADD https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh Miniconda3-latest-Linux-x86_64.sh
RUN mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda init bash
RUN conda create -c conda-forge --name arosics python=3
RUN conda install -c conda-forge 'arosics>=1.3.0'
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate arosics" >> $HOME/.bashrc
SHELL ["/bin/bash"]

Binary file error when executing a shell file in a Docker container

I'm trying to execute a shell file that contains a Python script, but I get the following error and don't know why.
File directory structure (/home/kwstat/workplace/analysis/report_me):
home
  kwstat
    workplace
      analysis
        report_me
          report_me.sh
          python_file
            python_code.py
            ...
$ docker exec -it test /bin/bash -c "source /home/kwstat/workplace/analysis/report_me/report_me.sh"
# Error
/home/kwstat/workplace/analysis/report_me/report_me.sh: line 30: source: /usr/local/bin/python: cannot execute binary file
I tried several things in the Dockerfile, but the same error occurred.
# 1.CMD ["/bin/bash","-l","-c"]
CMD ["/bin/bash","-l","-c"]
# 2. CMD bin/bash
CMD bin/bash
#########My Dockerfile#############
FROM continuumio/miniconda3
# System packages
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y subversion
WORKDIR /home/kwstat/workplace/analysis/report_me
COPY environments.yml /home/kwstat/workplace/analysis/report_me/environments.yml
RUN conda env create -f /home/kwstat/workplace/analysis/report_me/environments.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN echo "conda activate my_env" >> ~/.profile
# Activate the environment, and make sure it's activated:
#RUN echo "Make sure flask is installed:"
COPY requirements.txt /home/kwstat/me_report_dockerfile/requirements.txt
RUN pip install -r /home/kwstat/me_report_dockerfile/requirements.txt
WORKDIR /home/kwstat/workplace/analysis/report_me/python_file
COPY python_file/ /home/kwstat/workplace/analysis/report_me/python_file
WORKDIR /home/kwstat/workplace/analysis/report_me/
COPY report_me.sh ./report_me.sh
RUN chmod +x report_me.sh
CMD ["/bin/bash"]
Any help would be appreciated.
My problem was in the shell script. Setting the conda env path inside the shell script solved it.
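Judging from the error message, line 30 of report_me.sh was source-ing the Python binary itself. A sketch of the corrected script (the original contents are an assumption; the env name myenv and the /opt/conda path come from the continuumio/miniconda3 base image above):
#!/bin/bash
# Wrong: source reads the binary as if it were shell code
# source /usr/local/bin/python python_code.py
# Right: put the conda env on the path, then run the interpreter directly
. /opt/conda/etc/profile.d/conda.sh
conda activate myenv
python /home/kwstat/workplace/analysis/report_me/python_file/python_code.py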

Activate conda env and run pip install requirements in the Dockerfile

I can create my virtual environment dev1 successfully, but I cannot activate it and switch into it during the Docker build.
All I want is to switch to the venv and install my dependencies from requirements.txt.
My code:
WORKDIR /APP
ADD . /APP
ARG CONDA_VENV=dev1
RUN conda create -y --name ${CONDA_VENV} python=3.7
RUN conda activate ${CONDA_VENV}
RUN pip install -r requirements.txt
Error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
The command '/bin/sh -c conda activate ${CONDA_VENV}' returned a non-zero code: 1
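The error text itself points at the usual fixes; here is a sketch of both (assuming conda is installed under /opt/conda; adjust the path for your base image):
# Option 1: initialize and activate inside a single RUN,
# since activation does not persist across RUN commands
RUN . /opt/conda/etc/profile.d/conda.sh && \
    conda activate ${CONDA_VENV} && \
    pip install -r requirements.txt
# Option 2: skip activation entirely; conda run executes the command inside the env
RUN conda run -n ${CONDA_VENV} pip install -r requirements.txt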

How to set environment variables dynamically by script in Dockerfile?

I build my project with a Dockerfile. The project needs an installation of OpenVINO. OpenVINO needs some environment variables to be set dynamically by a script that depends on the architecture. The script is: script to set environment variables
As I've learned, a Dockerfile can't set environment variables in the image from a script.
Which way should I follow to solve the problem?
I need to set the variables because later I install OpenCV, which looks at these environment variables.
My first thought is that if I put the script into the ~/.bashrc file so the variables are set whenever bash starts, and I find some trick to start bash for a moment, that could solve my problem.
My second thought is to build an OpenVINO image, create a container from it, connect to it, and initialize the variables by running the script manually in the container. After that, commit the container to an image, create a new Dockerfile, and continue the build steps using this image.
OpenVINO Dockerfile example and the line that runs the script
My Dockerfile:
FROM ubuntu:18.04
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_2020.2.120.tgz
ENV INSTALLDIR /opt/intel/openvino
# openvino download
RUN curl -LOJ "${DOWNLOAD_LINK}"
# opencv download
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.3.0.zip && \
    wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.3.0.zip
RUN apt-get -y install sudo
# openvino installation
RUN tar -xvzf ./*.tgz && \
    cd l_openvino_toolkit_p_2020.2.120 && \
    sed -i 's/decline/accept/g' silent.cfg && \
    ./install.sh -s silent.cfg && \
    # rm -rf /tmp/* && \
    sudo -E $INSTALLDIR/install_dependencies/install_openvino_dependencies.sh
WORKDIR /home/sa
RUN /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh" && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> /home/sa/.bashrc && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
$INSTALLDIR/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
$INSTALLDIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN bash
# opencv installation
RUN unzip opencv.zip && \
    unzip opencv_contrib.zip && \
    # rm opencv.zip opencv_contrib.zip && \
    mv opencv-4.3.0 opencv && \
    mv opencv_contrib-4.3.0 opencv_contrib && \
    cd ./opencv && \
    mkdir build && \
    cd build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_INF_ENGINE=ON -D ENABLE_CXX11=ON -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/sa/opencv_contrib/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D WIDTH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D WITH_GSTREAMER=OFF -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=OFF .. && \
    make && \
    make install && \
    ldconfig
You need to cause the shell to load that file in every RUN command where you use it, and also at container startup time.
For startup time, you can use an entrypoint wrapper script:
#!/bin/sh
# Load the script of environment variables
. /opt/intel/openvino/bin/setupvars.sh
# Run the main container command
exec "$#"
Then in the Dockerfile, you need to include the environment variable script in RUN commands, and make this script be the image's ENTRYPOINT.
RUN . /opt/intel/openvino/bin/setupvars.sh && \
    /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
    /opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN ... && \
    . /opt/intel/openvino/bin/setupvars.sh && \
    cmake ... && \
    make && \
    ...
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD same as the command you set in the original image
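One detail worth checking (not covered above): the wrapper has to be executable, or the container will fail to start. If it is not already executable in your build context, add after the COPY line:
RUN chmod +x entrypoint.sh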
If you docker exec debugging shells in the container, they won't see these environment variables and you'll need to manually re-read the environment variable script. If you use docker inspect to look at low-level details of the container, it also won't show the environment variables.
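For example, to get a debugging shell that does see them, re-source the script by hand (container name illustrative):
docker exec -it mycontainer bash -c '. /opt/intel/openvino/bin/setupvars.sh && exec bash'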
It looks like that script just sets a couple of environment variables (especially $LD_LIBRARY_PATH and $PYTHONPATH), if to somewhat long-winded values, and you could just set these with ENV statements in the Dockerfile.
If you look at the docker build output, there are lines like ---> 0123456789ab after each build step; those are valid image IDs that you can docker run. You could run
docker run --rm 0123456789ab \
    env \
    | sort > env-a
docker run --rm 0123456789ab \
    sh -c '. /opt/intel/openvino/bin/setupvars.sh && env' \
    | sort > env-b
This will give you two local files with the environment variables with and without running this setup script. Find the differences (say, with comm(1)), put ENV before each line, and add that to your Dockerfile.
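The additions would then look something like this, where each value is a placeholder for whatever your env-b output actually contains; do not copy these literally:
ENV LD_LIBRARY_PATH=<value from env-b>
ENV PYTHONPATH=<value from env-b>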
You can't really use .bashrc in Docker. Many common paths don't invoke its startup files: in the language of that documentation, neither a Dockerfile RUN command nor a docker run instruction is an "interactive shell" so those don't read dot files, and usually docker run ... command doesn't invoke a shell at all.
You also don't need sudo (you are already running as root, and an interactive password prompt will fail); RUN sh -c is redundant (Docker inserts it on its own); and source isn't a standard shell command (prefer the standard ., which will work even on Alpine-based images that don't have shell extensions).

How to permanently change $PATH in docker container

I am building a docker image and want to permanently change its env vars and path. My simplified dockerfile is like this:
FROM python:3.6.8-slim-stretch
USER root
RUN pip3 install pyspark
RUN touch /etc/profile.d/set-up-env.sh && \
    echo export SPARK_HOME='/usr/local/lib/python3.6/site-packages/pyspark' >> /etc/profile.d/set-up-env.sh && \
    echo export PATH='${SPARK_HOME}/bin:${PATH}' >> /etc/profile.d/set-up-env.sh && \
    echo export PYSPARK_PYTHON='python3.6' >> /etc/profile.d/set-up-env.sh && \
    chmod +x /etc/profile.d/set-up-env.sh
The image can be built successfully with docker build -t data-job-base .
But when I run it with docker run --rm -it data-job-base bash, SPARK_HOME is empty in the running container and PATH is unchanged. If I cat /etc/profile.d/set-up-env.sh, I can see that it is properly written:
export SPARK_HOME=/usr/local/lib/python3.6/site-packages/pyspark
export PATH=${SPARK_HOME}/bin:${PATH}
export PYSPARK_PYTHON=python3.6
I don't understand: why doesn't set-up-env.sh get run when I start the shell?
Note that modifying /etc/environment has no effect either.
You can use the ENV directive in the Dockerfile. One caveat: within a single ENV instruction, a variable referenced on the right-hand side resolves to its value from before that instruction, so SPARK_HOME must be set in its own ENV line before PATH can use it:
FROM python:3.6.8-slim-stretch
USER root
ENV SPARK_HOME=/usr/local/lib/python3.6/site-packages/pyspark
ENV PATH=${SPARK_HOME}/bin:${PATH} \
    PYSPARK_PYTHON=python3.6
RUN pip3 install pyspark
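A quick sanity check of the resulting image (image tag taken from the question):
docker run --rm data-job-base sh -c 'echo $SPARK_HOME && which pyspark'
Both values should now be baked into the image for every kind of shell, interactive or not.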
OK, I have solved this by adding my shell commands to /etc/bash.bashrc instead of profile, profile.d, or environment.
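For instance, a sketch of that approach, reusing the script generated earlier in the Dockerfile (this RUN must come after the one that creates the script):
RUN cat /etc/profile.d/set-up-env.sh >> /etc/bash.bashrc
/etc/bash.bashrc is read by interactive non-login bash shells, which is exactly what docker run -it ... bash starts; note the ENV approach above also covers non-interactive commands.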
