Pipenv always finds itself in a virtualenv

My Pipenv seems to always think it is in a virtual environment, even when it clearly is not:
$ cd /tmp/
/tmp $ mkdir hello
/tmp $ cd hello/
/t/hello $ pipenv install --python 3.8 --dev pylint
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
How can I fix this?
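As the courtesy notice itself suggests, you can force Pipenv to ignore the surrounding environment by setting PIPENV_IGNORE_VIRTUALENVS before invoking it (a minimal shell sketch based on the message above):
export PIPENV_IGNORE_VIRTUALENVS=1
pipenv install --python 3.8 --dev pylint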

Related

Create docker image with specified CUDA toolkits and pytorch

I'm using my corporation's clusters through ICM, which provides a convenient way to configure the remote machine with Docker.
So I want to build a Docker image of my development environment (Python packages, CUDA, some utility tools like screen and rsync, and some necessary data) to deploy on the remote machine. Here is my Dockerfile:
# syntax=docker/dockerfile:1
FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
WORKDIR /app
RUN sudo apt-get install rsync
RUN sudo apt-get install screen
RUN conda create --prefix /data/vxxx/nn python=3.8
RUN conda init
# shortcut for activating my environment
RUN echo 'alias nn="conda activate /data/xxx/nn"' >> ~/.bashrc
RUN source ~/.bashrc
RUN nn
RUN pip3 install torchtext==0.8.1 pandas scipy scikit-learn transformers tensorboard -f https://download.pytorch.org/whl/torch_stable.html
# copy files from windows
COPY /mnt/c/test .
RUN cd /data
RUN mkdir xxx
RUN cd xxx
RUN mkdir Data
RUN mkdir Code
RUN cd Code
RUN git clone https://github.com/namespace-Pt/Document-Reduction.git
RUN git config --global user.name 'xxx'
RUN git config --global user.email 'xxx@1.com'
CMD [ "sleep", "infinity"]
I'm new to Docker and I followed the official Python image tutorial. I have the following questions:
What is WORKDIR? Does it mean creating a new directory where all files will be stored?
Why is my COPY command not working?
How do I publish my image to make it usable on the cluster?
Beginner here; from what I understood:
WORKDIR is the working directory for subsequent commands such as RUN, CMD, COPY, or ADD (see the sketch below).
Don't know; I would double-check the directory paths.
Don't know; normally I use Docker Hub.
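To illustrate the WORKDIR and COPY points: COPY sources are resolved relative to the build context passed to docker build, not the host filesystem root, and subsequent instructions run inside WORKDIR. A minimal sketch (the test/ directory is hypothetical and must live inside the build context):
FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
# WORKDIR creates /app if missing and makes it the current
# directory for the RUN, CMD, COPY and ADD instructions below
WORKDIR /app
# copies <build-context>/test into /app; an absolute host path
# such as /mnt/c/test lies outside the context and will fail
COPY test/ .
# cd does not persist across RUN lines, so chain commands with &&
RUN mkdir -p /data/xxx/Code && cd /data/xxx/Code \
    && git clone https://github.com/namespace-Pt/Document-Reduction.git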

Unable to create superuser in cvat

I am able to build and run the cvat tool, but when I try to create a superuser it gives me the error below.
ImportError: No module named 'gitdb.utils.compat'
I am running the command below to create the superuser.
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'
Does anyone have any idea or suggestion for the above problem?
It seems the newer version of gitdb does not work with cvat (the default version is 4.0.2); you can follow Furkan Kirac's answer, but with gitdb version 0.6.4:
# pip uninstall gitdb
# pip install gitdb==0.6.4
This problem is most probably due to a newer gitdb2 python package.
If cvat is already built as a Docker container, then for testing you must log into the container as root, uninstall gitdb2, and install an older gitdb.
docker exec -it -u root cvat bash
pip3 uninstall gitdb2
pip3 install gitdb
Then running the Python script should work. If that is the case, a persistent solution is to rebuild the containers.
You need to edit Dockerfile as below:
# Install git application dependencies
...
fi
RUN pip3 uninstall -y gitdb2
RUN pip3 install --no-cache-dir gitdb
Run "docker-compose build".
Hope this helps.
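To verify which gitdb version actually ended up inside the container, a quick check (a sketch, reusing the cvat container name from above):
docker exec -it cvat python3 -c "import gitdb; print(gitdb.__version__)"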

Why doesn't dockered centos recognize pip?

I want to create a container with Python and a few packages on top of centos. I tried running several commands inside a raw centos container and everything worked fine; I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? Why can everything be executed at the command line but not via RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
FROM centos
RUN yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. What's wrong?
You need to install pip first using
yum install python-pip
or if you need python3 (from epel)
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : @System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result on CentOS/RHEL.
UPDATE
From comment
But when I execute same commands from docker run -ti centos everything
is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try full path to pip?
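For reference, the SCL package installs its binaries under /opt/rh/rh-python36/root/usr/bin (the same path used in the ENV PATH below), so the full-path variant of the failing step would look roughly like:
RUN /opt/rh/rh-python36/root/usr/bin/pip install django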
As it has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation of this in the slides by BMitch, which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Each RUN command creates a different layer and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
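Putting the pieces together, a minimal Dockerfile sketch of the scl_source approach (assuming the rh-python36 package from the question; the && keeps the enable and the install in the same shell, since each RUN starts fresh):
FROM centos
RUN yum install -y centos-release-scl && yum install -y rh-python36
# enable the SCL python and install packages in one shell session
RUN source scl_source enable rh-python36 && pip install django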

Cleanly remove NPM-installed executable

I have an executable that's installed with npm globally:
npm install -g r2g
I uninstall it:
npm uninstall -g r2g
but a phantom executable still exists if I run r2g.
However, when I run $(which r2g) it's empty. So maybe it's in the bash hash?
When I run:
hash -p r2g
I get something strange:
$ hash -p r2g
hits command
3 /Users/alexamil/.nvm/versions/node/v10.1.0/bin/npm
4 /bin/rm
How can I completely remove an executable installed globally with NPM?
This isn't what you wanted, but you could use a multistage build so you won't have to remove build dependencies: https://docs.docker.com/develop/develop-images/multistage-build/. Ideally you'd install r2g in the first stage, use it, then move on to the next stage, where you would only install what's needed to run your application.
When you run:
$ npm uninstall -g r2g
The module will be removed but not the dependencies.
Remove it globally by running:
$ npm -g uninstall r2g --save
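If a phantom binary still lingers after uninstalling, a hedged shell sketch for cleaning it up (npm bin -g prints the global bin directory on older npm versions; hash -r clears bash's cached command locations):
# remove any leftover symlink from the global bin directory
rm -f "$(npm bin -g)/r2g"
# make bash forget the cached path
hash -r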

How to remove a virtualenv created by "pipenv run"

I am learning Python virtual environment. In one of my small projects I ran
pipenv run python myproject.py
and it created a virtualenv for me in C:\Users\USERNAME\.virtualenvs
I found it also created or modified some files under my project source directory. I am just wondering how to cleanly delete this virtualenv and revert my project back to a no-virtualenv state.
I am using python 3.6.4, and PyCharm.
You can run the pipenv command with the --rm option as in:
pipenv --rm
This will remove the virtualenv created for you under ~/.virtualenvs
See https://pipenv.kennethreitz.org/en/latest/cli/#cmdoption-pipenv-rm
I know this question is a bit old, but:
In the root of the project, where the Pipfile is located, you can run
pipenv --venv
which returns
Linux/OS X:
/Users/your_user_name/.local/share/virtualenvs/model-N-S4uBGU
Windows:
C:\Users\your_user_name\.virtualenvs\model-N-S4uBGU
and then remove this env by typing
Bash/Zsh:
rm -rf /Users/your_user_name/.local/share/virtualenvs/model-N-S4uBGU
Powershell:
Remove-Item -Recurse -Force 'C:\Users\your_user_name\.virtualenvs\model-N-S4uBGU'
Command Prompt
rmdir /s "C:\Users\your_user_name\.virtualenvs\model-N-S4uBGU"
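To also revert the project directory itself, note that pipenv adds a Pipfile (and, once something is installed, a Pipfile.lock) there; a short sketch of the full cleanup:
pipenv --rm              # delete the virtualenv
rm Pipfile Pipfile.lock  # remove the files pipenv created in the project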
