The "python" command on my machine goes through pyenv, but pipenv doesn't use the pyenv-selected Python when "pipenv shell" creates the virtual environment.
Why does that happen?
pipenv does not pick up the pyenv Python automatically; you have to tell it which Python to use with the --python option:
pipenv --python=3.10.6
If you already have the virtualenv with the wrong python, you have to remove it first before recreating it:
pipenv --rm
pipenv --python=3.10.6
pipenv install
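As a quick check afterwards, you can print the virtualenv path and the interpreter version pipenv actually ended up with:
pipenv --venv
pipenv run python --version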
Related
I'm trying to set up Pipenv on Ubuntu 22.04 LTS and I used:
sudo apt install pipenv
but I get an error:
FileNotFoundError: [Errno 2] No such file or directory: '/home/foo/.local/share/virtualenvs/hello-JDpq8NmY/bin/python'
I tried to update pip with:
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
Still no use.
I tried the solution suggested here and nothing changed.
The environment is there but the bin folder is missing.
I had the same problem. Remove the apt-packaged pipenv and install pipenv with
pip install pipenv
After this, you must add pip's install directory to your PATH variable.
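A minimal sketch, assuming pip placed pipenv under ~/.local/bin (the usual location for a user-level pip install); add the line to ~/.bashrc or ~/.profile so it persists:
export PATH="$HOME/.local/bin:$PATH"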
A possible cause is that pipenv cannot find the Python executable. If that is your case, you can specify the path explicitly:
pipenv install --python=/usr/bin/python3.10
Replace the path with the Python version you want.
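To confirm the interpreter really exists at that path before passing it to --python, a quick check such as the following should do:
command -v python3.10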
I can create my virtual environment dev1 successfully, but I cannot activate it and switch into it during the Docker build.
All I want is to switch into the venv and install my dependencies from requirements.txt.
My code:
WORKDIR /APP
ADD . /APP
ARG CONDA_VENV=dev1
RUN conda create -y --name ${CONDA_VENV} python=3.7
RUN conda activate ${CONDA_VENV}
RUN pip install -r requirements.txt
Error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
The command '/bin/sh -c conda activate ${CONDA_VENV}' returned a non-zero code: 1
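Each RUN instruction starts a fresh, non-interactive /bin/sh, so a conda activate in one RUN does not carry over to the next. One common workaround, sketched here under the assumption that the base image puts conda on PATH and ships a conda version recent enough to provide conda run, is to route the install through conda run instead of activating:
ARG CONDA_VENV=dev1
RUN conda create -y --name ${CONDA_VENV} python=3.7
RUN conda run -n ${CONDA_VENV} pip install -r requirements.txt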
I am trying to create a virtual environment with the following command:
pipenv --three
But it doesn't work as the image shows:
What should I do?
I don't have any virtual environments yet, like venv or virtualenv.
I had a similar problem and solved it in the following way: uninstall pip and pipenv, then reinstall the python3 packages (venv and pip). If that does not work, reinstall Python 3.7.
sudo apt remove python3-pip
sudo apt remove pipenv
sudo apt install python3-venv python3-pip
pip3 install pipenv
pipenv shell
Installing pip/setuptools/wheel with Linux Package Managers
I am deploying a Docker solution for my application. In my Dockerfile I use several conda-forge installs to build the containers. It works well for some of them but fails for others, and I am sure it is not about the package itself, because the same package sometimes installs fine and sometimes doesn't.
I have tried to use pip instead of conda, but that led to other errors, since my whole configuration is originally based on conda. I also read that RUN conda update --all should solve it, and for the pip setup RUN pip install --upgrade setuptools.
This is part of my docker file :
FROM dockerreg.cyanoptics.com/cyan/openjdk-java8:1.0.0
RUN conda update --all
RUN conda install -c conda-forge happybase=1.1.0 --yes
RUN conda install -c conda-forge requests-kerberos
RUN pip install --upgrade setuptools
RUN pip install --upgrade pip
RUN pip install kafka-python
RUN pip install requests-negotiate
The expected result is to build all containers successfully, but I am getting the following:
---> Using cache
---> 82f4cd49037d
Step 14 : RUN conda install -c conda-forge happybase=1.1.0 --yes
---> Using cache
---> c035b960aa3b
Step 15 : RUN conda install -c conda-forge requests-kerberos
---> Running in 54d869afcd00
Traceback (most recent call last):
  File "/opt/conda/bin/conda", line 7, in <module>
    from conda.cli import main
ModuleNotFoundError: No module named 'conda'
The command '/bin/sh -c conda install -c conda-forge requests-kerberos' returned a non-zero code: 1
make: *** [dockerimage] Error 1
Try combining the two conda install commands into a single command: RUN conda install -c conda-forge happybase=1.1.0 requests-kerberos --yes.
I ran into a similar issue with the install commands split up; it turns out the issue was that the first caused the python version to be upgraded, which in turn was incompatible with the conda install command - causing the error you're seeing.
Another workaround I found was to add python 3.6.8 as another install arg. One of the packages I was installing must have had a python 3.7 dependency, forcing it to upgrade python, and breaking conda install.
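For illustration, a combined install along those lines might look like the following, where the happybase pin comes from the original Dockerfile and the Python pin is the workaround just described; adjust both to your setup:
RUN conda install -c conda-forge --yes python=3.6.8 happybase=1.1.0 requests-kerberos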
Actually, the error indicates that /bin/sh is resolving conda from the wrong path.
Adding the proper path in the Dockerfile solves the issue, as follows:
ENV PATH /opt/conda/envs/env/bin:$PATH
A good reference on a related topic is here; it suggests creating a new virtual environment within the Dockerfile:
https://medium.com/@chadlagore/conda-environments-with-docker-82cdc9d25754
I have a git repo and would like to get Jenkins to clone it and then run:
virtualenv venv --distribute
/bin/bash venv/bin/activate
pip install -r requirements.txt
python tests.py
The console output from jenkins is:
+ virtualenv venv --distribute
New python executable in venv/bin/python
Installing distribute..........................done.
Installing pip...............done.
+ /bin/bash venv/bin/activate
+ pip install -r requirements.txt
Downloading/unpacking flask (from -r requirements.txt (line 1))
Running setup.py egg_info for package flask
SNIP
creating /usr/local/lib/python2.7/dist-packages/flask
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/var/lib/jenkins/workspace/infatics-website/build/flask/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-hkdBAi-record/install-record.txt failed with error code 1
Storing complete log in /home/jenkins/.pip/pip.log
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I've tried adding sudo before the command, but it doesn't work either:
+ sudo pip install -r requirements.txt
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Any ideas how to get around this? Also, when I run pip install -r requirements.txt in a terminal as the jenkins user, it doesn't need sudo permission. Can I get Jenkins (the process) to run as the jenkins user?
The fact that you have to use sudo to run pip is a big warning that your virtual environment isn't working. The build output shows that pip is installing the requirements in your system site-packages directory, which is not how virtualenv works.
Your build script doesn't actually preserve the activated virtual environment. The environment variables set by the activate script are set in a child bash process and are not propagated up to the build script. You should source the activate script instead of running a separate shell:
virtualenv venv --distribute
. venv/bin/activate
pip install -r requirements.txt
python tests.py
If you're running this as one build step, that should work (and install your packages in venv). If you want to add more steps, you'll need to set the PATH environment variable in your other steps. You're probably better off providing full paths to pip and python to ensure you're not dependent on system package installations.
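For those later steps, calling the tools through the virtualenv's own bin directory avoids depending on PATH at all; a sketch along those lines:
venv/bin/pip install -r requirements.txt
venv/bin/python tests.py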
Try using
stage('test') {
    agent {
        docker {
            image 'qnib/pytest'
        }
    }
    steps {
        sh 'virtualenv venv && . venv/bin/activate && pip install -r requirements.txt && python tests.py'
    }
}
I totally agree with what has been said... but to make this more "Jenkins":
Create a basic project, and in the custom build step do something like this:
PROJECT="Tree"
rm -Rf ~/Builds/$PROJECT
CODE_HOME=~/Builds/$PROJECT/code
PYENV_HOME=~/Builds/$PROJECT/python
export PYENV_HOME
export PYTHONPATH=""
echo "Creating new Python env"
/usr/local/bin/python3 -m venv --clear $PYENV_HOME
source $PYENV_HOME/bin/activate
echo "Get Project"
mkdir -p $CODE_HOME
cd $CODE_HOME
git clone https://github.com/MyUsername/MyTree.git .
pip install --upgrade pip
pip install nose
pip install coverage
pip install -r requirements.txt
python setup.py build
python setup.py install
After this you can then run your nose tests etc...
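For example, a follow-on test step might look like this; the name passed to --cover-package is only a placeholder for your own package:
nosetests --with-coverage --cover-package=mytree --with-xunit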
Here is what I did to make Jenkins run pip install on requirements.txt on a Windows machine:
In an "Execute Windows batch command" build step:
REM activate venv, update pip and install package
cmd /k "cd <path to your directory like C:\WebAPI> & .\venv\Scripts\activate.bat & python -m pip install -U pip & pip install -r .\requirements.txt"
cmd /k runs the Windows command prompt, and you can add any commands there. & chains commands, so you can run several of them on one line.