on macos "pipenv shell" => bash: update_terminal_cwd: command not found - pipenv

Trying to get pipenv working on macOS 10.14.4.
$cat Pipfile
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
numpy = "==1.14.1"
[requires]
python_version = "3.6.8"
This works:
$pipenv --rm
Removing virtualenv (/Users/me/.local/share/virtualenvs/blah-zeMrhw5d)…
This works:
$pipenv install
Creating a virtualenv for this project…
Pipfile: /Users/me/mypath/Pipfile
Using /Users/me/.pyenv/versions/3.6.8/bin/python3 (3.6.8) to create virtualenv…
⠏ Creating virtual environment...Using base prefix '/Users/me/.pyenv/versions/3.6.8'
New python executable in /Users/me/.local/share/virtualenvs/blah-zeMrhw5d/bin/python3
Also creating executable in /Users/me/.local/share/virtualenvs/blah-zeMrhw5d/bin/python
Installing setuptools, pip, wheel...
done.
Running virtualenv with interpreter /Users/me/.pyenv/versions/3.6.8/bin/python3
✔ Successfully created virtual environment!
Virtualenv location: /Users/me/.local/share/virtualenvs/blah-zeMrhw5d
Pipfile.lock (7fd81f) out of date, updating to (89a067)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success!
Updated Pipfile.lock (7fd81f)!
Installing dependencies from Pipfile.lock (7fd81f)…
🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1/1 — 00:00:02
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
This does NOT work:
$pipenv shell
Launching subshell in virtual environment…
. /Users/alexryan/.local/share/virtualenvs/cabin_monitoring-zeMrhw5d/bin/activate
bash: update_terminal_cwd: command not found
$ . /Users/alexryan/.local/share/virtualenvs/cabin_monitoring-zeMrhw5d/bin/activate
bash: update_terminal_cwd: command not found
(cabin_monitoring) $
These errors get thrown every time I issue a command in this environment.
(blah) >ls -lF
total 12
-rw-r--r-- 1 alexryan staff 159 Apr 30 14:49 Pipfile
-rw-r--r-- 1 alexryan staff 2683 Apr 30 14:50 Pipfile.lock
...
bash: update_terminal_cwd: command not found
(blah) $
From https://apple.stackexchange.com/a/139808/91429 I see that update_terminal_cwd is defined in /etc/bashrc.
I can source /etc/bashrc to make this error go away, but doing so messes up my prompt so that it is no longer obvious that I am inside the virtual environment.
(blah) $source /etc/bashrc
hostname:blah me$
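A less invasive workaround I could try (a hedged sketch, assuming bash): define a no-op stub in ~/.bashrc so the prompt hook stops failing in the non-login subshell that pipenv spawns, without clobbering the virtualenv prompt:
# hedged workaround in ~/.bashrc, not an official fix
if ! type update_terminal_cwd >/dev/null 2>&1; then
  update_terminal_cwd() { :; }
fi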
What is the best way to ensure that pipenv shell works correctly on macOS?
UPDATE
I'm using pyenv to specify the version of python I wish to use because pipenv seems to require this.
I installed pyenv via curl https://pyenv.run | bash and added these lines to ~/.bashrc as requested:
# Load pyenv automatically by adding
# the following to ~/.bashrc:
export PATH="/Users/me/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
And made sure that ~/.bashrc was being called from ~/.bash_profile like so (because apparently pipenv shell is a non-login shell).
[[ -f ~/.bashrc ]] && source ~/.bashrc
pyenv is working fine with multiple versions of python installed.

Related

Why is anaconda3 still in `$PATH` after removing respective lines from `~/.bash_profile`?

I have uninstalled anaconda3 according to the documentation and this Stack Overflow entry by:
Installing the cleaner
$ conda install anaconda-clean
Activating the 'base' virtual environment
$ source ~/opt/anaconda3/bin/activate
Running the cleaner
(base) $ anaconda-clean --yes
Deactivating the 'base' virtual environment
(base) $ conda deactivate
Removing the files
$ rm -rf ~/opt/anaconda3
$ rm -rf ~/opt/.anaconda_backup
Deleting all lines added by conda from environment file(s)
Opening my .bash_profile file (for others it might be .profile and/or .bashrc)
In my case I deleted these lines:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/me/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/me/opt/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/me/opt/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/me/opt/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
I quit iTerm and restarted my macOS Catalina 10.15.7 machine the next day. But when running echo $PATH I still get:
/Users/me/opt/anaconda3/bin:/Library/Frameworks/Python.framework/Versions/3.9/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin
What did I do wrong?
I verified that I saved the changes to ~/.bash_profile:
$ cat ~/.bash_profile
# history size
export HISTFILESIZE=1000000
export HISTSIZE=1000000
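For completeness, one more place worth looking (a hedged suggestion): Catalina's default shell is zsh, so the stray export may live in a zsh startup file or in /etc/paths.d rather than in ~/.bash_profile. A quick grep over the usual suspects:
# hedged diagnostic - covers the common shell startup locations
grep -n "anaconda" ~/.bash_profile ~/.bashrc ~/.profile ~/.zshrc ~/.zprofile /etc/profile /etc/paths /etc/paths.d/* 2>/dev/null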

Linux Ash Shell script to check if certain package is installed & called via docker

I'm trying to run docker on embedded Linux running OpenWRT.
Since the embedded Linux system is resource-constrained, I don't want Docker to reinstall already-installed packages, so I want Docker to run a custom shell script:
RUN $CMD_STRING = $(gcc)
RUN $CMD_OUTPUT=$(${CMD_STRING} -version)
RUN if [[ ${CMD_OUTPUT} == *"not found"* ]]; echo ${CMD_STRING} "was NOT FOUND, Installing..."
opkg update
opkg install gcc
fi
I would like a similarly simple if/else structure.
I keep getting:
-ash: gcc: not found
-ash: -rw-r--r--: not found
I don't have an OpenWrt system to test on, but this may work if it's only an ash and Docker problem. I tested it on Alpine since it also has ash (from BusyBox).
Dockerfile:
from alpine:latest
RUN ash -c "if ! gcc 2>/dev/null; then echo 'not found..'; echo 'installing..'; fi"
Build it:
docker build .
Sending build context to Docker daemon 3.072kB
Step 1/2 : from alpine:latest
---> 389fef711851
Step 2/2 : RUN ash -c "if ! gcc 2>/dev/null; then echo 'not found..'; echo 'installing..'; fi"
---> Running in 2c47bee97dfc
not found..
installing..
Removing intermediate container 2c47bee97dfc
---> 35e698d1aea6
Successfully built 35e698d1aea6
You have extra spaces in your first command, and you shouldn't use a variable name with a dollar sign at the beginning. You probably also don't want to assign it with $(), since you haven't tested whether the command exists yet. Trying to run a command to see if it exists isn't a great way to go about it either. You can check whether a program is installed like this:
if ! command -v gcc > /dev/null 2>&1; then
opkg install gcc
fi
(That's POSIX-compatible so should work in ash.)
You could also run opkg list-installed and check the output (see the docs) which may be useful for packages that aren't executables in your PATH.
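For instance, a hedged sketch along those lines (assuming opkg list-installed prints one "name - version" line per package):
# check the package database instead of probing PATH
if ! opkg list-installed | grep -q '^gcc '; then
opkg update
opkg install gcc
fi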

wercker with docker switching user results in error, how to install nvm then?

Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I switch user in my Docker image. I'm running wercker dev from the command line. The Dockerfile builds fine with Docker itself on the command line, as well as on Docker Hub. I can run it fine. It's just when I use it for wercker that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
# Line the build seems to break on:
USER dockworker
When I comment this line out, it seems to pass. The problem with this, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower). Generally I prefer not to install this as root, so I add a user for it.
Workaround?
However, when I do install nvm as root in my Dockerfile (so just removing the user related lines in the codeblock above completely):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
#install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out that nvm and npm are not found. The step in the wercker.yml:
box:
id: francobolli/docker-ubuntu-14.04-php-5.6
tag: latest
env:
NVM_DIR: /usr/local/nvm
dev:
steps:
- script:
name: gulp styles and javascript
code: |
npm install
bower install --allow-root
gulp --env=production
I don't really understand this. When I run both Docker images from the command line (so with wercker removed from the context completely) I can execute nvm and npm just fine, but when I run it through wercker, it seems the .bashrc file is not being executed. When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it will be executed and I can npm install without a problem, so it seems this is never executed through the .bashrc:
...
- script:
name: gulp styles and javascript
code: |
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work.
Question
So my question is: what am I doing wrong that I cannot switch user in the Wercker build? And even if I could, would I have the same problem as when running nvm as root: nvm and npm CAN be found when a Docker container is instantiated from the command line, but CAN'T be found when running it with Wercker? What's the best solution?
I'd rather not add commands in the wercker.yml if it can be resolved through proper user configuration or proper nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) as displayed in the original question.

How does one create a Python virtualenv in Jenkins?

I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
Which generates the following output:
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
The output of "which pip" seems to want to use the correct one:
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
My current working directory is what I expect it to be:
/var/lib/jenkins/jobs/Run Tests/workspace
But... wtf?
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I have been using Python virtualenvs with Jenkins every day for the last two years, at multiple companies and for small side projects, and I cannot say I have found "THE" answer. Still, I hope that sharing my experience will help others save time. Hopefully I will get further feedback that makes the decision easier.
Avoid ShiningPanda - it's not well maintained, incompatible with Jenkins2 pipelines and prevents execution of jobs in parallel. Also it has the bad habit of leaving orphan environments on disk.
DIY via bash and virtualenv is my current favourite. Create the virtualenv inside $WORKSPACE and, if you are not always cleaning the workspace, run virtualenv --relocatable on it before activating it. This is because the Jenkins workspace folder's disk location can change between executions of job N and N+1.
If you use multiple builders that need the same virtualenv, the easiest way is to dump your environment to a file and source it at the beginning of the next builder, as in the sketch below.
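A minimal sketch of that dump-and-source approach (assuming bash builders; env.sh is a hypothetical file name):
# builder 1: create the venv and snapshot the activated environment
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
export -p > "$WORKSPACE/env.sh"
# builder 2: restore the snapshot instead of re-activating
. "$WORKSPACE/env.sh"
python -m pip list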
To ease the maintenance I am planning to investigate these:
direnv
virtualenv-wrapper (mkvirtualenv)
pyenv
If you hit the shebang line length limit, the best thing to do is to change your Jenkins home directory to something short, like /j.
Jenkins pipelines can be made to run with virtual environments but there are multiple things to consider.
The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.
An activated venv simply changes environment variables, and environment variables do not persist between stages in jenkins. See withEnv
If you are using version-controlled multibranch pipelines, Jenkins creates a workspace with the branch name and a commit hash in the path - which can be quite long. venv scripts (e.g. pip) all start with a shebang line which includes the full path to the Python interpreter in the venv (the interpreter itself is a symlink). E.g.,
~/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin$ cat pip
#!/var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin/python3.5
The kernel only reads the first 128 characters or so of a script's shebang line - which I found did not quite include the full venv path:
bash: ./pip: /var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSU: bad interpreter: No such file or directory
This particular problem can be avoided by executing the script with Python instead. E.g. python3.5 ./pip
I'd recommend avoiding ShiningPanda.
I set up my virtual environments with Anaconda/Miniconda. When installing conda, make sure you're running as the jenkins user.
your_user#$ sudo -u jenkins sh
jenkins#$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
jenkins#$ bash Miniconda3-latest-Linux-x86_64.sh
Since Jenkins runs sh rather than bash, I added conda path to /etc/profile:
export PATH="/var/lib/jenkins/miniconda3/bin:$PATH"
Then in Jenkinsfile you can create and delete conda environments. Here's an example that creates a new environment for each build:
pipeline {
agent any
stages {
stage('Unit tests') {
steps {
sh '''
conda create --yes -n ${BUILD_TAG} python
source activate ${BUILD_TAG}
# example of a unit test with nose2
pip install nose2
nose2
'''
}
}
}
post {
always {
sh 'conda remove --yes -n ${BUILD_TAG} --all'
}
}
}
I had the same problem. As I can see, your project is named 'Run Tests', so the name contains a space. That was the problem for me: I simply renamed the project to something like RunTests, and the venv works now! Note that Jenkins asks you to confirm before renaming a project.
There are some issues with the Python venv plugins in different OS environments.
Here is how I call a Python helper method manually. Not best practice, but it works.
// Put this stage on top of pipeline
stage('Prepare venv') {
steps {
script {
if (isUnix()) {
env.ISUNIX = "TRUE" // cache isUnix() function to prevent blueocean show too many duplicate step (Checks if running on a Unix-like node) in python function below
sh 'python3 -m venv pyenv'
PYTHON_PATH = sh(script: 'echo ${WORKSPACE}/pyenv/bin/', returnStdout: true).trim()
}
else {
env.ISUNIX = "FALSE"
powershell(script:"py -3 -m venv pyenv") // Windows does not allow calling python3.exe with venv; see https://github.com/msys2/MINGW-packages/issues/5001
PYTHON_PATH = sh(script: 'echo ${WORKSPACE}/pyenv/Scripts/', returnStdout: true).trim()
}
try {
// Sometimes an agent with an older pip version can cause errors due to an incompatible plugin.
Python("-m pip install --upgrade pip")
}
catch (ignore) { } // upgrading pip always returns false when it is already the latest version
// After this you can call Python() anywhere from pipeline
Python("-m pip install -r requirements.txt")
}
}
}
// Several plugins like WithPyenv do not work perfectly across platforms when using a virtualenv.
// Put this method outside pipeline
def Python(String command) {
if (env.ISUNIX == "TRUE") {
sh script:"source ${WORKSPACE}/pyenv/bin/activate && python ${command}", label: "python ${command}"
}
else {
powershell script:"${WORKSPACE}\\pyenv\\Scripts\\Activate.ps1 ; python ${command}", label: "python ${command}"
}
}
After activating the virtualenv, try to run pip as a module:
python -m pip install ...
python -m pip vs pip:
python -m pip: executes the Python interpreter binary, which loads the pip module from its site-packages directory
pip: executes the pip binary/script picked up from $PATH
I have found that using python -m pip solved most of the pip permission problems I encountered.
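A quick way to see the difference (an illustration, assuming a POSIX shell):
which pip # whichever pip is first on $PATH
python -m pip --version # the pip bundled with this exact interpreter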
@hardbyte's answer:
The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.
plus the fix from https://stackoverflow.com/a/70812248/1497139 ("pyvenv-3.4 returned non-zero exit status 1") got me working:
sudo apt install python3.10-venv
and then in jenkins in the execute shell step:
python3.10 -m venv .venv
source .venv/bin/activate
...

In a Dockerfile, How to update PATH environment variable?

I have a dockerfile that downloads and builds GTK from source, but the following line is not updating my image's environment variable:
RUN PATH="/opt/gtk/bin:$PATH"
RUN export PATH
I read that I should be using ENV to set environment values, but the following instruction doesn't seem to work either:
ENV PATH /opt/gtk/bin:$PATH
This is my entire Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y golang gcc make wget git libxml2-utils libwebkit2gtk-3.0-dev libcairo2 libcairo2-dev libcairo-gobject2 shared-mime-info libgdk-pixbuf2.0-* libglib2-* libatk1.0-* libpango1.0-* xserver-xorg xvfb
# Downloading GTK
RUN wget http://ftp.gnome.org/pub/gnome/sources/gtk+/3.12/gtk+-3.12.2.tar.xz
RUN tar xf gtk+-3.12.2.tar.xz
RUN cd gtk+-3.12.2
# Setting environment variables before running configure
RUN CPPFLAGS="-I/opt/gtk/include"
RUN LDFLAGS="-L/opt/gtk/lib"
RUN PKG_CONFIG_PATH="/opt/gtk/lib/pkgconfig"
RUN export CPPFLAGS LDFLAGS PKG_CONFIG_PATH
RUN ./configure --prefix=/opt/gtk
RUN make
RUN make install
# running ldconfig after make install so that the newly installed libraries are found.
RUN ldconfig
# Setting the LD_LIBRARY_PATH environment variable so the systems dynamic linker can find the newly installed libraries.
RUN LD_LIBRARY_PATH="/opt/gtk/lib"
# Updating PATH environment program so that utility binaries installed by the various libraries will be found.
RUN PATH="/opt/gtk/bin:$PATH"
RUN export LD_LIBRARY_PATH PATH
# Collecting garbage
RUN rm -rf gtk+-3.12.2.tar.xz
# creating go code root
RUN mkdir gocode
RUN mkdir gocode/src
RUN mkdir gocode/bin
RUN mkdir gocode/pkg
# Setting the GOROOT and GOPATH enviornment variables, any commands created are automatically added to PATH
RUN GOROOT=/usr/lib/go
RUN GOPATH=/root/gocode
RUN PATH=$GOPATH/bin:$PATH
RUN export GOROOT GOPATH PATH
You can use Environment Replacement in your Dockerfile as follows:
ENV PATH="${PATH}:/opt/gtk/bin"
Although the answer that Gunter posted was correct, it is no different from what I had already posted. The problem was not the ENV directive, but the subsequent instruction RUN export PATH.
There's no need to export the environment variables, once you have declared them via ENV in your Dockerfile.
As soon as the RUN export ... lines were removed, my image was built successfully.
[I mentioned this in response to the selected answer, but it was suggested to make it more prominent as an answer of its own]
It should be noted that
ENV PATH="/opt/gtk/bin:${PATH}"
may not be the same as
ENV PATH="/opt/gtk/bin:$PATH"
The former, with curly brackets, might provide you with the host's PATH. The documentation doesn't suggest this would be the case, but I have observed that it is. This is simple to check: just do RUN echo $PATH and compare it to RUN echo ${PATH}.
This is discouraged (if you want to create/distribute a clean Docker image), since the PATH variable is set by the /etc/profile script, and its value can override yours.
head /etc/profile:
if [ "`id -u`" -eq 0 ]; then
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
else
PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games"
fi
export PATH
At the end of the Dockerfile, you could add:
RUN echo "export PATH=$PATH" > /etc/environment
So PATH is set for all users.

Resources