I am new to pipenv, so there might be something I'm not understanding here. However, it seems like the virtual environment that gets created depends on the current directory, which seems bad to me.
Here is what I did:
Checked out code from Github which already had Pipfile and Pipfile.lock
Did some unrelated stuff... at this point I was in a directory called /home/user/me/miniconda3/bin/
Ran /home/user/me/miniconda3/bin/pipenv run python /home/user/me/my-script-dir/my-script.py
This caused Pipenv to create a virtual environment. Output:
Creating a virtualenv for this project...
Using /home/user/me/miniconda3/bin/python (3.6.4) to create virtualenv…
Already using interpreter /home/user/me/miniconda3/bin/python
Using base prefix '/home/user/me/miniconda3'
New python executable in /home/user/me/.local/share/virtualenvs/bin-YnM8YhRk/bin/python
Installing setuptools, pip, wheel...done.
Virtualenv location: /home/user/me/.local/share/virtualenvs/bin-YnM8YhRk
Creating a Pipfile for this project…
Then I realized that I needed to run pipenv install so this time I cd'd to the directory where the script is actually stored, /home/user/me/my-script-dir/, and ran /home/user/me/miniconda3/bin/pipenv install. Then I got this output:
Creating a virtualenv for this project…
Using /home/user/me/miniconda3/bin/python (3.6.4) to create virtualenv…
Already using interpreter /home/user/me/miniconda3/bin/python
Using base prefix '/home/user/me/miniconda3'
New python executable in /home/user/me/.local/share/virtualenvs/my-script-dir-Ex37BY7g/bin/python
Installing setuptools, pip, wheel...done.
Virtualenv location: /home/user/me/.local/share/virtualenvs/my-script-dir-Ex37BY7g
Installing dependencies from Pipfile.lock (6c24e4)…
So as you can see, I was actually running the same script each time, but somehow it created two different virtual environments. And the virtual environments are named after whatever happened to be my current directory at the time, not the directory of the script. This seems like it would be very unwieldy unless I am missing something.
You are correct, the virtualenv Pipenv uses does depend on the current directory.
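For example, the usual fix is to cd into the directory that contains the Pipfile before invoking pipenv, so the virtualenv gets keyed to the project directory rather than to wherever you happen to be. A minimal sketch using the paths from your question:
cd /home/user/me/my-script-dir/
pipenv install                    # reuses or creates the venv tied to this directory
pipenv run python my-script.py    # runs inside that same venv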
Related
I ran pipenv install to create a Pipfile in the current directory that doesn't have a Pipfile. It gave the following output but did not create a Pipfile. Why not?
Installing dependencies from Pipfile.lock (639627)…
🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0/0 — 00:00:00
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
It looks like it found a Pipfile.lock somewhere and used it? (similar to git behavior)
Use the PIPENV_NO_INHERIT environment variable to ignore inheriting from directories above the current directory, e.g.,
PIPENV_NO_INHERIT=True pipenv install
In your case, pipenv searched directories above the current directory and found a Pipfile there that it used (the location of which can be seen with pipenv --where).
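For example, before installing you can check which project root pipenv would inherit, and opt out if it is not the one you want (the nested path here is just illustrative):
cd /home/user/me/some/nested/dir
pipenv --where                          # prints the directory whose Pipfile pipenv found
PIPENV_NO_INHERIT=True pipenv install   # ignore parent directories and start fresh here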
(Incidentally, I looked at the pipenv documentation but was unable to find where it discusses this behavior, so please add a link here to that documentation if you find it.)
I ran pipenv shell first (creating a Pipfile), then pipenv install (creating a Pipfile.lock)
'pip install pipenv' command returns "Requirement already satisfied ..."
Is it possible to tell pipenv where the venv is located? Perhaps there's something you can put in the Pipfile, or something for the .env file?
I fairly frequently have to recreate my venv because pipenv seemingly loses track of where it is.
For example, I started a project using Pycharm to configure the file system and create my pipenv interpreter. It created the venv in ~/.local/share/virtualenvs/my-project-ZbEWMGNA and it was able to keep track of where that interpreter was located.
Switching to a terminal window & running pipenv commands then resulted in:
Warning: No virtualenv has been created for this project yet! Consider running pipenv install first to automatically generate one for you or see pipenv install --help for further instructions.
At which point I ran pipenv install from the terminal & pointed pycharm at that venv, so the path would become ~/apps/my-project-ZbEWMGNA (which sits alongside the project files ~/apps/my-project)
Now I've got venvs in both paths and pipenv still can't find them.
mwalker@Mac my-project % pipenv --where
/Users/mwalker/apps/my-project
mwalker@Mac my-project % pipenv --venv
No virtualenv has been created for this project yet!
Aborted!
mwalker@Mac my-project % ls ~/apps
my-project
my-project-ZbEWMGNA
mwalker@Mac my-project % ls ~/.local/share/virtualenvs
my-project-ZbEWMGNA
Yes, it is possible by setting environment variables. You can set a path for virtual environments via WORKON_HOME, or have the virtual environment created inside the project with PIPENV_VENV_IN_PROJECT.
Pipenv automatically honors the WORKON_HOME environment variable, if you have it set — so you can tell pipenv to store your virtual environments wherever you want
-- https://pipenv-fork.readthedocs.io/en/latest/advanced.html#custom-virtual-environment-location
or
PIPENV_VENV_IN_PROJECT
If set, creates .venv in your project directory.
-- https://pipenv-fork.readthedocs.io/en/latest/advanced.html#pipenv.environments.PIPENV_VENV_IN_PROJECT
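A rough sketch of both options (the paths are just examples, not anything pipenv requires):
# Option 1: collect all venvs under a directory of your choosing
export WORKON_HOME=~/apps/venvs
pipenv install
# Option 2: keep the venv next to the code as .venv
export PIPENV_VENV_IN_PROJECT=1
cd ~/apps/my-project
pipenv install
pipenv --venv    # should now print ~/apps/my-project/.venv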
In my experience, PyCharm will use the existing venv created by Pipenv. Otherwise, it will create one in whatever directory PyCharm is configured to use.
Whenever we try to run the command pip install nltk or pip install numpy, we get an error that pip is not recognized as an internal or external command, and then we add pip to the PATH. I want to know what PATH is and why we add pip's location to it. Can anyone help, please?
From the Linux Information Project:
PATH is an environmental variable in Linux and other Unix-like operating systems that tells the shell which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user. It increases both the convenience and the safety of such operating systems and is widely considered to be the single most important environmental variable.
So basically it's a list of directories in which the shell looks to find commands.
Let's say your pip is installed at /usr/local/bin/pip; if /usr/local/bin/ is not in your PATH variable, the shell won't be able to find pip.
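For example, you can inspect PATH and prepend a directory yourself (the directory below is just the common location for user-installed scripts, adjust as needed):
echo "$PATH"                            # colon-separated list of directories the shell searches
export PATH="$HOME/.local/bin:$PATH"    # prepend a directory so its executables are found first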
If you're using a Python virtual environment, like one created with python3 -m venv my-venv, you usually have to source bin/activate under my-venv, which adds my-venv/bin to your PATH variable for the current shell. Then your shell will be able to find the virtual-environment-specific scripts.
Since PATH is set per shell, when you close the current shell and open a new one, the variable gets reset. Then you have to source bin/activate under my-venv again to make the shell look into your virtual environment.
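A short sketch of that cycle, assuming a venv named my-venv in the current directory:
python3 -m venv my-venv
source my-venv/bin/activate    # adds my-venv/bin to PATH for this shell only
which pip                      # now resolves to my-venv/bin/pip
# open a new terminal and PATH is back to the default, so you must activate again
source my-venv/bin/activate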
I am trying to set up a docker-compose architecture for local development and production, and I can't figure out when in the container's life it is best to install library dependencies. At the same time, I am not sure whether these should be placed in the container or in an external volume.
All my code is mounted in external volumes, so that changes are immediately taken into account without rebuilding the containers, but I am not sure about libraries that need to be installed by pip (I am running a Python backend) and npm/yarn (for the webpack front-end).
Placing requirements.txt and package.json into the containers and running pip install and yarn install in the container build process means that I have to rebuild the container any time dependencies change - that is too much overhead.
Putting them in an external volume and running pip install and yarn install as part of the command of each container when it is started seems to solve the issue.
The build process of each container then contains only platform dependencies (e.g. installing Python, webpack, or other platform tools), but libraries are installed after the container has started (with the CMD directive).
Is this the correct approach? I have seen a lot of examples doing exactly the opposite and running npm install in the build process of the container - but I don't see any advantage to that. Am I missing something?
Installing dependencies is usually part of the build process. Mounting code is a good trick when developing, in order to get changes reflected immediately.
Concerning adding requirements.txt or package.json: installing dependencies takes time, and to keep that manageable you need to take advantage of Docker layer caching. In particular, you want to avoid cache invalidation.
For pip I suggest the following during the development phase: for dependencies that you are unlikely to change, install them in a separate RUN instruction. Your Dockerfile will look something like:
FROM ..
# Dependencies that rarely change go in their own layer so it stays cached
RUN pip install package1 package2 package3 ...
# Changing requirements.txt only invalidates the layers from here down
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
Keep only the dependencies that might change in requirements.txt. Once you are done developing, add the other packages back to requirements.txt and build using the requirements file.
A similar approach would be adding two requirements files, and at the end combining them.
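That two-file variant might look something like this in the Dockerfile (the file names are placeholders I chose, not anything standard):
ADD requirements-stable.txt requirements-stable.txt
RUN pip install -r requirements-stable.txt    # rarely changes, so this layer stays cached
ADD requirements-dev.txt requirements-dev.txt
RUN pip install -r requirements-dev.txt       # changes often, only this layer gets rebuilt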
I've just installed phpcpd globally via the following command:
sudo composer global require 'sebastian/phpcpd=*'
My ~/composer/vendor/bin/ directory is in my $PATH variable too.
Now when I try to run phpcpd I get the following error:
You need to set up the project dependencies using the following commands:
wget http://getcomposer.org/composer.phar
php composer.phar install
Any idea what I'm doing wrong here?
Thanks.
The point Sebastian didn't mention in the installation instructions is that by using Composer to globally install PHPCPD, you don't get its dependencies installed, only the direct code. You have to go to the PHPCPD directory in the global vendor directory (i.e. the PHPCPD main folder in there, something like ...somepath/.composer/vendor/sebastian/phpcpd/) and run composer install there.
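For example (the global Composer home differs between setups; ~/.composer is typical on older installs and ~/.config/composer on newer ones, so adjust the path):
cd ~/.composer/vendor/sebastian/phpcpd    # or ~/.config/composer/vendor/sebastian/phpcpd
composer install                          # pulls in phpcpd's own dependencies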
The easier way would be to just install the .phar file, but I understand this has different issues.
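If you go the .phar route, the usual pattern is something like the following (the download URL is the one the PHPUnit/phpcpd project publishes for its phars; double-check it for the version you want):
wget https://phar.phpunit.de/phpcpd.phar
chmod +x phpcpd.phar
./phpcpd.phar --version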