Activate conda env and run pip install requirements in the Dockerfile - docker

I can create my virtual environment dev1 successfully, but I cannot activate it and switch into it during the Docker build.
All I want is to switch to the venv and install my dependencies from requirements.txt.
My code:
WORKDIR /APP
ADD . /APP
ARG CONDA_VENV=dev1
RUN conda create -y --name ${CONDA_VENV} python=3.7
RUN conda activate ${CONDA_VENV}
RUN pip install -r requirements.txt
Error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
The command '/bin/sh -c conda activate ${CONDA_VENV}' returned a non-zero code: 1
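Each RUN line is executed by a fresh, non-interactive /bin/sh, so conda activate can never carry over between steps, which is exactly what the error message is complaining about. A minimal sketch of one common workaround, assuming a conda release recent enough to ship the conda run subcommand:
WORKDIR /APP
ADD . /APP
ARG CONDA_VENV=dev1
RUN conda create -y --name ${CONDA_VENV} python=3.7
# conda run executes a command inside the named environment
# without needing `conda activate` or any shell initialization.
RUN conda run -n ${CONDA_VENV} pip install -r requirements.txt
Alternatively, SHELL ["conda", "run", "-n", "dev1", "/bin/bash", "-c"] makes every subsequent RUN execute inside the environment; one of the Dockerfiles further down uses exactly this pattern.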

Related

Docker RUN command near end of Dockerfile ... boots into container unless I give a CMD at the end, but doesn't work either way. Any ideas?

I am working on a Dockerfile to be used with Google Cloud Run.
I'm not getting the command to run.
Here's the (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["gcloud", "compute", "ssh", "--internal-ip", "our-persist-cluster-py3-prod", "--zone=us-central1-b", "--project", "our-customer-tech-sem-prod", "--", "'ps -ef'", "|", "./checker2.py"]
This tries to run the CMD at the end, but says it can't find the host specified. (Runs fine from the command line outside Docker.)
There were a couple of things wrong at the end: (1) a typo in the host name (fixed with the help of a colleague), and (2) the pipe inside the CMD, which I had to move into a shell script to get it to work correctly.
Here's the final (slightly obfuscated) script:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
RUN mkdir /secrets
COPY secrets/* /secrets
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["./rungcloud.sh"]

Cannot tail .log files inside docker container

I have a Docker container that runs multiple Python scripts with pm2; each script has its own .log file, and I want to tail them separately so I can debug them easily. The problem is that when I run docker exec -it mycontainer /bin/sh and then tail -f scriptFile.log in the directory where the file is located, it only shows the first lines and then freezes until I hit Ctrl + C. The image I'm using is nikolaik/python-nodejs:python3.9-nodejs16-alpine.
Also tried docker logs tail -f container /app/logfile.log
Here is the dockerfile
FROM nikolaik/python-nodejs:python3.9-nodejs16-alpine
WORKDIR /app
# Installing pm2
RUN npm install pm2 -g
# Create prod env variable
ENV PROD=true
COPY . .
# Installing lxml
RUN apk add --update --no-cache g++ gcc libxslt-dev
# Installing requirements
RUN pip install -r requirements.txt
CMD ["pm2-runtime", "start", "ecosystem.config.js"]

when execute shell file in docker container, binary file error

I'm trying to execute a shell file that contains a Python script, but I don't know why I'm getting the error below.
file directory structure: /home/kwstat/workplace/analysis/report_me
home
└── kwstat
    └── workplace
        └── analysis
            └── report_me
                ├── report_me.sh
                └── python_file
                    ├── python_code.py
                    └── ...
$ docker exec -it test /bin/bash -c "source /home/kwstat/workplace/analysis/report_me/report_me.sh"
# Error
/home/kwstat/workplace/analysis/report_me/report_me.sh: line 30: source: /usr/local/bin/python: cannot execute binary file
I tried several things in the Dockerfile, but the same error occurred.
# 1.CMD ["/bin/bash","-l","-c"]
CMD ["/bin/bash","-l","-c"]
# 2. CMD bin/bash
CMD bin/bash
#########My Dockerfile#############
FROM continuumio/miniconda3
# System packages
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y subversion
WORKDIR /home/kwstat/workplace/analysis/report_me
COPY environments.yml /home/kwstat/workplace/analysis/report_me/environments.yml
RUN conda env create -f /home/kwstat/workplace/analysis/report_me/environments.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN echo "conda activate my_env" >> ~/.profile
# Activate the environment, and make sure it's activated:
#RUN echo "Make sure flask is installed:"
COPY requirements.txt /home/kwstat/me_report_dockerfile/requirements.txt
RUN pip install -r /home/kwstat/me_report_dockerfile/requirements.txt
WORKDIR /home/kwstat/workplace/analysis/report_me/python_file
COPY python_file/ /home/kwstat/workplace/analysis/report_me/python_file
WORKDIR /home/kwstat/workplace/analysis/report_me/
COPY report_me.sh ./report_me.sh
RUN chmod +x report_me.sh
CMD ["/bin/bash"]
Any help would be appreciated ~
It turned out my problem was in the shell script itself.
Inside the script I set the conda env path,
and that solved everything.
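The corrected script isn't shown; assuming the continuumio/miniconda3 base image (where conda lives under /opt/conda) and the env name from the Dockerfile's SHELL line, the top of report_me.sh would look roughly like this:
#!/bin/bash
# Make `conda activate` work in a non-interactive script,
# then run the Python code with the environment's interpreter.
source /opt/conda/etc/profile.d/conda.sh
conda activate myenv
python python_file/python_code.py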

docker run interactive with conda environment already activated

I'd like to create a docker image such that when you run it interactively, a conda environment is already activated.
Current state:
docker run -it my_image
(base) root@1c32ba066db2:~# conda activate my_env
(my_env) root@1c32ba066db2:~#
Desired state:
docker run -it my_image
(my_env) root@1c32ba066db2:~#
More info:
In my Dockerfile, I include all the necessary RUN commands to install conda, create the environment, and activate the environment. Relevant portions reproduced below.
SHELL [ "/bin/bash", "--login", "-c" ]
...
# Install miniconda.
ENV CONDA_DIR $HOME/miniconda3
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
    chmod +x ~/miniconda.sh && \
    ~/miniconda.sh -b -p $CONDA_DIR && \
    rm ~/miniconda.sh
# Make non-activate conda commands available.
ENV PATH=$CONDA_DIR/bin:$PATH
# Make conda activate command available from /bin/bash --login shells.
RUN echo ". $CONDA_DIR/etc/profile.d/conda.sh" >> ~/.profile
# Make conda activate command available from /bin/bash --interactive shells.
RUN conda init bash
# Create and activate the environment.
RUN conda env create --force -f environment.yml
RUN conda activate my_env
When I run this, conda activate my_env seems to run and succeed. But when I enter interactively with docker run -it, the activated env is (base).
Additionally, I've tried having the last command be CMD conda activate my_env, but then it just runs that and does not enter interactive mode.
Each RUN statement (including docker run) is executed in a new shell, so one cannot simply activate an environment in a RUN command and expect it to continue being active in subsequent RUN commands.
Instead, you need to activate the environment as part of the shell initialization. The SHELL command has already been changed to include --login, which is great. Now you simply need to add conda activate my_env to .profile or .bashrc:
...
# Create and activate the environment.
RUN conda env create --force -f environment.yml
RUN echo "conda activate my_env" >> ~/.profile
and just be sure this line comes after the initialization section that Conda adds to the file.
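Because the SHELL instruction includes --login, every later RUN sources ~/.profile as well, so the activation applies to the rest of the build. A quick sanity check (an added illustration, not part of the original answer):
# Prints "my_env" if activation works during the build;
# `which python` should resolve inside the my_env prefix.
RUN echo "active env: $CONDA_DEFAULT_ENV" && which python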
The following code in my Dockerfile does what you describe:
# Install anaconda
RUN cd $HOME && wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.10.3-Linux-x86_64.sh && bash Miniconda3-py38_4.10.3-Linux-x86_64.sh -b -p $HOME/miniconda
# Create env
RUN $HOME/miniconda/bin/conda init bash
RUN $HOME/miniconda/bin/conda env create -f my_env.yml
# Activate conda environment on startup
RUN echo "export PATH=$HOME/miniconda/bin:$PATH" >> $HOME/.bashrc
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate my_env" >> $HOME/.bashrc
SHELL ["/bin/bash"]
results in:
(my_env) root@e5fe69843fa1:/#
when running an interactive container.
Remember to change all instances of my_env to the name of your conda environment.

How can I make jenkins run "pip install"?

I have a git repo and would like to get Jenkins to clone it and then run:
virtualenv venv --distribute
/bin/bash venv/source/activate
pip install -r requirements.txt
python tests.py
The console output from jenkins is:
+ virtualenv venv --distribute
New python executable in venv/bin/python
Installing distribute..........................done.
Installing pip...............done.
+ /bin/bash venv/bin/activate
+ pip install -r requirements.txt
Downloading/unpacking flask (from -r requirements.txt (line 1))
Running setup.py egg_info for package flask
SNIP
creating /usr/local/lib/python2.7/dist-packages/flask
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/var/lib/jenkins/workspace/infatics-website/build/flask/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-hkdBAi-record/install-record.txt failed with error code 1
Storing complete log in /home/jenkins/.pip/pip.log
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I've tried adding sudo before the command, but that doesn't work either:
+ sudo pip install -r requirements.txt
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Any ideas how to get around this? Also, when I run pip install -r requirements.txt in a terminal as the jenkins user, it doesn't need sudo. Can I get Jenkins (the process) to run as the jenkins user?
The fact that you have to use sudo to run pip is a big warning that your virtual environment isn't working. The build output shows that pip is installing the requirements into your system site-packages directory, which is not how virtualenv is meant to work.
Your build script doesn't actually preserve the activated virtual environment: the environment variables set by the activate script are set in a child bash process and are not propagated back to the build script. You should source the activate script instead of running it in a separate shell:
virtualenv venv --distribute
. venv/bin/activate
pip install -r requirements.txt
python tests.py
If you're running this as one build step, that should work (and install your packages in venv). If you want to add more steps, you'll need to set the PATH environment variable in your other steps. You're probably better off providing full paths to pip and python to ensure you're not dependent on system package installations.
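If you do split the build into multiple steps, calling the virtualenv's binaries by full path sidesteps activation entirely, since venv/bin/pip and venv/bin/python are hard-wired to the environment (a hedged illustration):
./venv/bin/pip install -r requirements.txt
./venv/bin/python tests.py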
Try using
stage('test') {
    agent {
        docker {
            image 'qnib/pytest'
        }
    }
    steps {
        sh 'virtualenv venv && . venv/bin/activate && pip install -r requirements.txt && python tests.py'
    }
}
I totally agree with what has been said... but to make this more "Jenkins":
Create a basic project, and in the custom steps do something like this:
PROJECT="Tree"
rm -Rf ~/Builds/$PROJECT
CODE_HOME=~/Builds/$PROJECT/code
PYENV_HOME=~/Builds/$PROJECT/python
export PYENV_HOME
export PYTHONPATH=""
echo "Creating new Python env"
/usr/local/bin/python3 -m venv --clear $PYENV_HOME
source $PYENV_HOME/bin/activate
echo "Get Project"
mkdir -p $CODE_HOME
cd $CODE_HOME
git clone https://github.com/MyUsername/MyTree.git .
pip install --upgrade pip
pip install nose
pip install coverage
pip install -r requirements.txt
python setup.py build
python setup.py install
After this you can then run your nose tests, etc.
Here is what I did to make Jenkins run pip install requirements on a Windows machine:
In an "Execute Windows batch command" pipeline step:
REM activate venv, update pip and install package
cmd /k "cd <path to your directory like C:\WebAPI> & .\venv\Scripts\activate.bat & python -m pip install -U pip & pip install -r .\requirements.txt"
cmd /k starts the Windows command interpreter and runs the given commands. & is a command separator (not a pipe), so you can chain multiple commands on one line.
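One caveat worth adding (not from the original answer): /k keeps the interpreter open after the commands finish, while /c runs them and then exits, which is usually what you want in a CI step. A hedged variant using the same example path:
cmd /c "cd C:\WebAPI & .\venv\Scripts\activate.bat & python -m pip install -U pip & pip install -r .\requirements.txt"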
