I am using conda-forge in my Dockerfile to install a ready-made environment from the conda-forge repository. Building that environment means a lot of packages get installed by the conda-forge commands.
The problem is that this happens every time I rebuild the Docker image.
Is there a way to cache this step so the packages are not reinstalled on every build?
Critical part of the code:
ADD https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh Miniconda3-latest-Linux-x86_64.sh
RUN mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda init bash
RUN conda create -c conda-forge --name arosics python=3
RUN conda install -c conda-forge 'arosics>=1.3.0'
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate arosics" >> $HOME/.bashrc
SHELL ["/bin/bash"]
Related
I have the following Dockerfile:
FROM --platform=linux/x86_64 nvidia/cuda:11.7.0-devel-ubuntu20.04
COPY app ./app
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade && apt-get install -y apt-utils
RUN apt-get install -y \
net-tools iputils-ping \
build-essential cmake git \
curl wget vim \
zip p7zip-full p7zip-rar \
imagemagick ffmpeg \
libomp5
# RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
COPY Miniconda3-latest-Linux-x86_64.sh .
RUN chmod guo+x Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
RUN export PATH=~/miniconda3/bin:$PATH
RUN conda --version
RUN conda update -n base conda
RUN conda create -y --name servier python=3.6
RUN conda activate servier
RUN conda install -c conda-forge rdkit
CMD ["bash"]
When I run docker image build -t image_test_cuda2 ., it breaks at RUN conda --version.
The error is /bin/sh: 1: conda: not found. The problem is that RUN export PATH=~/miniconda3/bin:$PATH has no effect: it does not put conda on the PATH for the later steps.
If I build the image only up to RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3, enter the container manually with docker exec -it <id> /bin/bash, and run export PATH=~/miniconda3/bin:$PATH by hand, it works fine. Manually running the next command, conda update -n base conda, inside the container also works.
The conclusion is that RUN export PATH=~/miniconda3/bin:$PATH does not take effect during docker image build. How can I solve this issue?
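A minimal sketch of the usual fix, assuming the installer ran as root so ~ resolves to /root: each RUN step starts a fresh shell, so an exported variable is gone by the next step, whereas the Dockerfile ENV instruction persists for every later RUN, CMD and ENTRYPOINT.
# ENV survives across build steps; `RUN export ...` only lives for its own shell
ENV PATH=/root/miniconda3/bin:$PATH
RUN conda --version
RUN conda update -n base conda
RUN conda create -y --name servier python=3.6
# `conda activate` will not persist across RUN steps either; installing into
# the named environment with -n avoids needing to activate it at build time
RUN conda install -n servier -c conda-forge rdkit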
I'd like to create a docker image such that when you run it interactively, a conda environment is already activated.
Current state:
docker run -it my_image
(base) root@1c32ba066db2:~# conda activate my_env
(my_env) root@1c32ba066db2:~#
Desired state:
docker run -it my_image
(my_env) root@1c32ba066db2:~#
More info:
In my Dockerfile, I include all the necessary RUN commands to install conda, create the environment, and activate the environment. Relevant portions reproduced below.
SHELL [ "/bin/bash", "--login", "-c" ]
...
# Install miniconda.
ENV CONDA_DIR $HOME/miniconda3
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p $CONDA_DIR && \
rm ~/miniconda.sh
# Make non-activate conda commands available.
ENV PATH=$CONDA_DIR/bin:$PATH
# Make conda activate command available from /bin/bash --login shells.
RUN echo ". $CONDA_DIR/etc/profile.d/conda.sh" >> ~/.profile
# Make conda activate command available from /bin/bash --interactive shells.
RUN conda init bash
# Create and activate the environment.
RUN conda env create --force -f environment.yml
RUN conda activate my_env
When I run this, conda activate my_env seems to run and succeed. But when I enter interactively with docker run -it, the activated env is (base).
Additionally, I've tried having the last command be CMD conda activate my_env, but then it just runs that and does not enter interactive mode.
Each RUN statement (including docker run) is executed in a new shell, so one cannot simply activate an environment in a RUN command and expect it to continue being active in subsequent RUN commands.
Instead, you need to activate the environment as part of the shell initialization. The SHELL command has already been changed to include --login, which is great. Now you simply need to add conda activate my_env to .profile or .bashrc:
...
# Create and activate the environment.
RUN conda env create --force -f environment.yml
RUN echo "conda activate my_env" >> ~/.profile
and just be sure this is after the section added by Conda.
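For reference, the tail of ~/.profile should then look roughly like this (a sketch; the path assumes HOME is /root, so CONDA_DIR resolves to /root/miniconda3):
# Last lines of ~/.profile: the conda.sh hook added earlier in the Dockerfile
# must come before the activate line, or `conda activate` is not yet defined
. /root/miniconda3/etc/profile.d/conda.sh
conda activate my_env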
The following code in my Dockerfile does what you describe:
# Install anaconda
RUN cd $HOME && wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.10.3-Linux-x86_64.sh && bash Miniconda3-py38_4.10.3-Linux-x86_64.sh -b -p $HOME/miniconda
# Create env
RUN $HOME/miniconda/bin/conda init bash
RUN $HOME/miniconda/bin/conda env create -f my_env.yml
# Activate conda environment on startup
RUN echo "export PATH=$HOME/miniconda/bin:$PATH" >> $HOME/.bashrc
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate my_env" >> $HOME/.bashrc
SHELL ["/bin/bash"]
results in:
(my_env) root@e5fe69843fa1:/#
when running an interactive container.
Remember to change all instances of my_env to the name of your conda environment.
I built a Docker image from a Dockerfile with Python and some libraries inside (my project code is not in the image). In my local working directory there are some scripts to be run inside the container. So here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns,
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
Try making the file executable before running it, as John mentioned, in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/main.py  # <-- just add this line as well
# I have some doubts about the pathing
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a Python script in Docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["src/main.py", "--config=test --results-dir=/home/me/Results"]
I'm experimenting for the first time with creating a Docker container to run ROS. I am getting a confusing error and I can't figure out how to troubleshoot it.
bash-3.2$ docker run -ti --name turtlebot3 rosdocker
To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details.
bash: /home/ros/catkin_ws/devel/setup.bash: No such file or directory
I am building rosdocker with this Dockerfile from inside VS Code, using the Docker plugin's "Build Image" command. Here's the Dockerfile:
FROM ros:kinetic-robot-xenial
RUN apt-get update && apt-get install --assume-yes \
sudo \
python-pip \
ros-kinetic-desktop-full \
ros-kinetic-turtlebot3 \
ros-kinetic-turtlebot3-bringup \
ros-kinetic-turtlebot3-description \
ros-kinetic-turtlebot3-fake \
ros-kinetic-turtlebot3-gazebo \
ros-kinetic-turtlebot3-msgs \
ros-kinetic-turtlebot3-navigation \
ros-kinetic-turtlebot3-simulations \
ros-kinetic-turtlebot3-slam \
ros-kinetic-turtlebot3-teleop
# install python packages
RUN pip install -U scikit-learn numpy scipy
RUN pip install --upgrade pip
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
# create catkin_ws
RUN mkdir /home/$USERNAME/catkin_ws
WORKDIR /home/$USERNAME/catkin_ws
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
I am not sure where the error is coming from and I don't know how to debug or troubleshoot it. I would appreciate any pointers!
You are creating a user ros and then in the last line doing this:
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
So the shell will look for /home/ros/catkin_ws/devel/setup.bash, which is never created anywhere inside the Dockerfile.
Either create this file, or if you are planning to mount the workspace from the host into the container, run with the -v option placed before the image name:
docker run -ti --name turtlebot3 -v sourcevolume:destinationvolume rosdocker
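If instead you want devel/setup.bash to exist inside the image itself, one option (a sketch, assuming the catkin tools provided by the ros:kinetic base image) is to initialise and build an initially empty workspace during the image build, so the file is generated before .bashrc tries to source it:
# Initialise and build an empty catkin workspace so devel/setup.bash exists
RUN mkdir -p /home/$USERNAME/catkin_ws/src
WORKDIR /home/$USERNAME/catkin_ws
RUN /bin/bash -c "source /opt/ros/kinetic/setup.bash && cd src && catkin_init_workspace && cd .. && catkin_make"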
I want to run nvm with docker exec
something like
docker run -d <image>
docker exec <container> nvm use v6.13.0 && npm install
but I have an error
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"nvm\": executable file not found in $PATH": unknown
I know that I can do something like the following, which works:
docker exec <container> /bin/bash -c 'source "$NVM_DIR"/nvm.sh && nvm use v6.13.0'
But I don't want to. Why? Because the point is to create a Docker container usable across all my projects, with different versions of Python and Node, and to run nvm use <version> && npm install directly from GitLab CI using the .nvmrc file in my project.
My .gitlab-ci.yml runs a Makefile which basically runs the nvm use and npm install:
image: cracky5457/nvm-pyenv-yarn
stages:
- install
- test
variables:
GITLAB_CACHING: "true"
cache:
paths:
- pip-cache/
key: "python_2.7"
installing:
stage: install
script:
- make install
artifacts:
paths:
- venv/
- node_modules/
expire_in: 1 hour
tags:
- docker-runner
and I don't want to put /bin/bash -c into my Makefile, because then the project becomes Docker-dependent when run locally.
This is my Docker image with the instructions to run it (you have to create the files base_dependencies.txt, node-versions.txt, and python-versions.txt), or you can just docker pull cracky5457/nvm-pyenv-yarn:
https://hub.docker.com/r/cracky5457/nvm-pyenv-yarn/
FROM phusion/baseimage:0.10.0
# Make sure bash is the standard shell
RUN rm /bin/sh && ln -sf /bin/bash /bin/sh
ENV ENV ~/.profile
ENV PYENV_ROOT /root/.pyenv
ENV PATH $PYENV_ROOT/shims:$PYENV_ROOT/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
# Add yarn registry
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install base system libraries.
ENV DEBIAN_FRONTEND=noninteractive
COPY base_dependencies.txt /base_dependencies.txt
RUN apt-get update && \
apt-get install -y $(cat /base_dependencies.txt)
# Install pyenv and default python version.
ENV PYTHONDONTWRITEBYTECODE true
RUN git clone https://github.com/yyuu/pyenv.git /root/.pyenv && \
cd /root/.pyenv && \
git checkout `git describe --abbrev=0 --tags` && \
eval "$(pyenv init -)"
# Install nvm and default node version.
ENV NVM_DIR /usr/local/nvm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash && \
echo 'source $NVM_DIR/nvm.sh' >> /etc/profile
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install python and node versions
COPY python-versions.txt /python-versions.txt
RUN for version in $(cat python-versions.txt); do pyenv install $version; pyenv global $version; pip install virtualenv; done
COPY node-versions.txt /node-versions.txt
RUN for version in $(cat node-versions.txt); do source $NVM_DIR/nvm.sh; nvm install $version; done
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
I didn't find a proper way.
You can create a bash script at /usr/bin/nvm and make it executable with chmod +x /usr/bin/nvm:
#!/bin/bash
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
nvm "$#"
And then
docker exec <container> nvm use
But it's tricky, and I can't add another instruction to my exec; for example, I can't run docker exec <container> nvm use && npm install at the same time.
But I finally fixed my issue directly in .gitlab-ci.yml using
$(NVM_DIR)/nvm.sh && nvm use && npm install
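In the Makefile that the CI calls, the install step ends up along these lines (a sketch; the target name and the leading dot that sources nvm.sh are assumptions, since nvm.sh has to be sourced before the nvm shell function exists):
# Makefile target invoked via `make install` from .gitlab-ci.yml
install:
	. $(NVM_DIR)/nvm.sh && nvm use && npm install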