Jenkinsfile - virtualenv command not found - Jenkins

I have a local Jenkins server that's running on http://localhost:8080 on my personal Mac machine.
Now I have created a sample Jenkinsfile and am trying to run it through a Jenkins job.
stage ('Install_Requirements') {
    steps {
        sh """
            echo ${SHELL}
            [ -d venv ] && rm -rf venv
            #virtualenv --python=python2.7 venv
            virtualenv venv
            #. venv/bin/activate
            export PATH=${VIRTUAL_ENV}/bin:${PATH}
            pip install --upgrade pip
            pip install -r requirements.txt -r dev-requirements.txt
            make clean
        """
    }
}
There are multiple stages in the project, but when I execute this 'Install_Requirements' stage I get an error:
[python-jenkinsfile-testing] Running shell script
+ echo /bin/bash
/bin/bash
+ '[' -d venv ']'
+ virtualenv venv
/Users/Shared/Jenkins/Home/workspace/python-jenkinsfile-testing#tmp/durable-4363725f/script.sh: line 6: virtualenv: command not found
When I run the virtualenv venv command in my terminal it creates the venv, and I'm also able to activate the environment.
Not sure why I get this error.

Related

Using conda in Docker

I am using conda-forge in my Dockerfile in order to install a ready-made environment from the conda-forge repository. To set up the environment, a lot of packages have to be installed from conda-forge.
The problem is that this happens every time I rebuild the Docker image.
Is there some way to cache it, so everything is not reinstalled on every build?
Critical part of code:
ADD https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh Miniconda3-latest-Linux-x86_64.sh
RUN mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda init bash
RUN conda create -c conda-forge --name arosics python=3
RUN conda install -c conda-forge 'arosics>=1.3.0'
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate arosics" >> $HOME/.bashrc
SHELL ["/bin/bash"]

How to run requirments.txt through Jenkins using Docker?

Hi, I am new to Jenkins. I have a repository that I clone first; then I need to build the image using a Dockerfile, which also includes the requirments.txt. When I run build.sh directly it works fine, but when I run it through Jenkins using a pipeline like this:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo bash /home/salman/mlops/build.sh'
            }
        }
    }
}
it throws this error:
Step 1/6 : FROM ubuntu:latest
---> 9873176a8ff5
Step 2/6 : RUN apt-get update && apt-get install -y python3-pip python3-dev && cd /usr/local/bin && ln -s /usr/bin/python3 python && pip3 install --upgrade pip
---> Using cache
---> e5f345c76347
Step 3/6 : WORKDIR mlcode
---> Using cache
---> 273bf7d60f31
Step 4/6 : COPY requirments.txt .
COPY failed: file not found in build context or excluded by .dockerignore: stat requirments.txt: file does not exist
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I even checked the path of the requirments.txt file and it's correct, but it doesn't run from Jenkins. This is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
WORKDIR mlcode
COPY requirments.txt .
RUN pip install --no-cache-dir -r requirments.txt
CMD ["python3", "/mlcode/code.py"]
and this is my build.sh:
if cat /home/salman/mlops/Mlops/mlcode/code.py | grep sklearn ; then
sudo docker build -t mlcontainer -f /home/salman/mlops/Dockerfile .
elif cat /home/salman/mlops/Mlops/mlcode/code.py | grep keras ; then
sudo docker build -t dlcontainer -f /home/salman/mlops/Dockerfile .
else
echo "Code not found";
fi

Cannot get asdf-direnv to work in Docker when building

I'm trying to use asdf-direnv in Docker. Following the README of asdf-direnv, I made this Dockerfile:
FROM nvidia/cuda:10.2-devel-ubuntu18.04
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-ic", "-l"]
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
# Python
RUN apt-get install -y make build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
    libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
# Utils
RUN apt-get install -y git
RUN apt-get clean
WORKDIR /venv
RUN git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.8.1
RUN echo ". $HOME/.asdf/asdf.sh" >> ~/.bashrc
RUN echo ". $HOME/.asdf/completions/asdf.bash" >> ~/.bashrc
RUN asdf plugin add direnv
RUN asdf install direnv 2.28.0
RUN asdf local direnv 2.28.0
RUN echo "eval \"\$(asdf exec direnv hook bash)\"" >> ~/.bashrc
RUN echo "direnv() { asdf exec direnv \"\$#\"; }" >> ~/.bashrc
RUN mkdir -p ~/.config/direnv/
RUN echo "source \"\$(asdf direnv hook asdf)\"" >> ~/.config/direnv/direnvrc
RUN echo "export DIRENV_LOG_FORMAT=" >> ~/.config/direnv/direnvrc
RUN asdf plugin add python
RUN asdf install python 3.8.7
RUN asdf local python 3.8.7
RUN echo "use asdf" >> .envrc
RUN echo "layout python" >> .envrc
RUN direnv allow
RUN echo $(which python)
CMD ["/bin/bash"]
The issue is that the line RUN echo $(which python) works properly when I run the container but not when I build the image. I get:
/root/.asdf/shims/python when building with docker build . -t venv-gpu -f docker-gpu/Dockerfile
/venv/.direnv/python-3.8.7/bin/python when running docker run --gpus all -it venv-gpu
How can I fix this?

Dockerfile: Python3 not found

I am trying to convert a bash script to a Dockerfile, since we are going the containerization route with AWS Batch.
Basically I install CPLEX (an optimization library) and Anaconda, install some related packages, check that my environment is good to go, and then kick off a shell script to run the batch job.
Here is a snippet of my Dockerfile:
FROM amazonlinux:latest
# Download packages for container
RUN yum update -y
RUN yum -y install which unzip aws-cli
RUN yum install -y tar.x86_64
RUN yum install gzip -y
RUN yum install ncompress -y
RUN yum -y install wget
RUN yum install -y nano
# Set working directory
WORKDIR /setup
#: Copy CPLEX installer binary and installation script.
COPY cplex_odee1210.linux-x86-64.bin /setup/
COPY cplex_installer_input.sh /setup/
#: Install CPLEX and update .bashrc
RUN chmod +x /setup/cplex_odee1210.linux-x86-64.bin
RUN chmod +x cplex_installer_input.sh
RUN ./cplex_installer_input.sh | bash cplex_odee1210.linux-x86-64.bin
RUN echo 'export PATH=$PATH:/opt/ibm/ILOG/CPLEX_Optimizer1210/cplex/bin/x86-64_linux' >> /root/.bashrc \
    && /bin/bash -c "source ~/.bashrc"
ENV PATH $PATH:/opt/ibm/ILOG/CPLEX_Optimizer1210/cplex/bin/x86-64_linux
#: Download Anaconda
COPY Anaconda3-2019.10-Linux-x86_64.sh /setup/
RUN bash Anaconda3-2019.10-Linux-x86_64.sh -b -p /home/ec2-user/anaconda3
RUN echo 'export PATH=$PATH:/home/ec2-user/anaconda3/bin' >> /root/.bashrc \
    && /bin/bash -c "source ~/.bashrc"
ENV PATH $PATH:/home/ec2-user/anaconda3/bin
RUN conda install pandas -y \
    && conda install numpy -y \
    && conda install ujson -y \
    && pip install docplex \
    && pip install boto3 \
    && pip install grpcio \
    && pip install grpcio-tools
RUN python3 -m docplex.mp.environment
ADD fetch_and_run.sh /usr/local/bin/fetch_and_run.sh
ENTRYPOINT ["/usr/local/bin/fetch_and_run.sh"]
From there, I kick off a bash script
#!/bin/bash
date
echo "Args: $#"
env
echo "script_path: $1"
echo "script_name: $2"
echo "path_prefix: $3"
echo "jobID: $AWS_BATCH_JOB_ID"
echo "jobQueue: $AWS_BATCH_JQ_NAME"
echo "computeEnvironment: $AWS_BATCH_CE_NAME"
echo "current directory: $(pwd)"
mkdir /tmp/scripts/
aws s3 cp $1 /tmp/scripts/$2
python3 /tmp/scripts/${@:2}
But for some reason, I keep getting
/tmp/tmp.hQlWYBEFs/batch-file-temp: line 20: python3: command not found
Do I need to change some PATH variables? Why isn't Docker picking up my Python 3 version?
The image needs to have python3 installed. Building images works off of files and programs that exist in the container. The python3 you have installed on your own system is not available.

How can I make Jenkins run "pip install"?

I have a git repo and would like to get Jenkins to clone it and then run:
virtualenv venv --distribute
/bin/bash venv/source/activate
pip install -r requirements.txt
python tests.py
The console output from jenkins is:
+ virtualenv venv --distribute
New python executable in venv/bin/python
Installing distribute..........................done.
Installing pip...............done.
+ /bin/bash venv/bin/activate
+ pip install -r requirements.txt
Downloading/unpacking flask (from -r requirements.txt (line 1))
Running setup.py egg_info for package flask
SNIP
creating /usr/local/lib/python2.7/dist-packages/flask
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/var/lib/jenkins/workspace/infatics-website/build/flask/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-hkdBAi-record/install-record.txt failed with error code 1
Storing complete log in /home/jenkins/.pip/pip.log
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I've tried adding sudo before the command, but it doesn't work either:
+ sudo pip install -r requirements.txt
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Any ideas how to get around this? Also, when I run pip install -r requirements.txt in a terminal as the jenkins user, it doesn't need sudo permission. Can I get Jenkins (the process) to run as the jenkins user?
The fact that you have to use sudo to run pip is a big warning that your virtual environment isn't working. The build output shows that pip is installing the requirements in your system site-packages directory, which is not the way virtualenv works.
Your build script doesn't actually preserve the activated virtual environment. The environment variables set by the activate script are set in a child bash process and are not propagated up to the build script. You should source the activate script instead of running a separate shell:
virtualenv venv --distribute
. venv/bin/activate
pip install -r requirements.txt
python tests.py
If you're running this as one build step, that should work (and install your packages in venv). If you want to add more steps, you'll need to set the PATH environment variable in your other steps. You're probably better off providing full paths to pip and python to ensure you're not dependent on system package installations.
Try using
stage('test') {
    agent {
        docker {
            image 'qnib/pytest'
        }
    }
    steps {
        sh 'virtualenv venv && . venv/bin/activate && pip install -r requirements.txt && python tests.py'
    }
}
I totally agree with what has been said... But to make this more "Jenkins":
Create a basic project, and in the custom steps do something like this:
PROJECT="Tree"
rm -Rf ~/Builds/$PROJECT
CODE_HOME=~/Builds/$PROJECT/code
PYENV_HOME=~/Builds/$PROJECT/python
export PYENV_HOME
export PYTHONPATH=""
echo "Creating new Python env"
/usr/local/bin/python3 -m venv --clear $PYENV_HOME
source $PYENV_HOME/bin/activate
echo "Get Project"
mkdir -p $CODE_HOME
cd $CODE_HOME
git clone https://github.com/MyUsername/MyTree.git .
pip install --upgrade pip
pip install nose
pip install coverage
pip install -r requirements.txt
python setup.py build
python setup.py install
After this you can then do your nose tests etc...
Here is what I did to make Jenkins execute pip install requirements on a Windows machine:
In an "Execute Windows batch command" pipeline step:
REM activate venv, update pip and install package
cmd /k "cd <path to your directory like C:\WebAPI> & .\venv\Scripts\activate.bat & python -m pip install -U pip & pip install -r .\requirements.txt"
cmd /k executes the Windows command prompt and you can add any commands there. & is a command separator, so you can chain multiple commands.
