Unable to upgrade pip in Docker build

When running the Docker build (using Jenkins CI), it fails on upgrading pip (the last line of the Dockerfile). I need it to upgrade to version 8.1.1, as the log suggests, because my deploy fails on a pip version mismatch.
Dockerfile
FROM ubuntu:14.04
FROM python:3.4
# Expose a port for gunicorn to listen on
EXPOSE 8002
# Make a workdir and virtualenv
WORKDIR /opt/documents_api
# Install everything else
ADD . /opt/documents_api
# Set some environment variables for pip installation and db management
ENV CQLENG_ALLOW_SCHEMA_MANAGEMENT="True"
RUN apt-get update
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
Here's the error:
Step 15 : RUN pip3 install --upgrade pip
19:46:00 ---> Running in 84e2bcc850c0
19:46:04 Collecting pip
19:46:04 Downloading pip-8.1.1-py2.py3-none-any.whl (1.2MB)
19:46:04 Installing collected packages: pip
19:46:04 Found existing installation: pip 7.1.2
19:46:04 Uninstalling pip-7.1.2:
19:46:05 Successfully uninstalled pip-7.1.2
19:46:10 Exception:
19:46:10 Traceback (most recent call last):
19:46:10 File "/usr/local/lib/python3.4/shutil.py", line 424, in _rmtree_safe_fd
19:46:10 os.unlink(name, dir_fd=topfd)
19:46:10 FileNotFoundError: [Errno 2] No such file or directory: 'pip'
19:46:10 You are using pip version 7.1.2, however version 8.1.1 is available.

When you use two FROM directives, Docker creates two output images and only the last FROM determines the final one; that's why it's messed up.
First, remove FROM ubuntu:14.04, and don't run apt-get update in a Dockerfile; it's bad practice (your image will be different every time you build, defeating the whole purpose of containers/Docker).
Second, you can check the official Python image's Dockerfile to know which version of pip is installed; for example, python:3.4 already ships pip 8.1.1.
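For instance, you can verify the bundled pip version directly before relying on it (assuming Docker is available locally):
docker run --rm python:3.4 pip --version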
Third, there is a special image for your case (an external application): python:3.4-onbuild. Your Dockerfile can be reduced to:
FROM python:3.4-onbuild
ENV CQLENG_ALLOW_SCHEMA_MANAGEMENT="True"
EXPOSE 8002
CMD python myapp.py
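For context, the -onbuild variant carries ONBUILD triggers that fire when your image is built, roughly like this (paraphrased; check the official image's Dockerfile for the exact contents):
FROM python:3.4
WORKDIR /usr/src/app
# These run automatically in the child build:
ONBUILD COPY requirements.txt /usr/src/app/
ONBUILD RUN pip install --no-cache-dir -r requirements.txt
ONBUILD COPY . /usr/src/app
So your requirements.txt is installed and your source copied in without any explicit COPY or RUN lines.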
One last thing: try to use Alpine-based images; they're much smaller (for Python, almost 10 times smaller than the Ubuntu-based image).

Turns out the host I was running on had no outside (internet) access, so the upgrade was failing. We solved it by adding another package to the DTR that had the necessary version in it.
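If your build host can only reach the internet through a proxy, Docker's predefined proxy build args are another way out; they work without declaring an ARG in the Dockerfile (proxy.example.com is a placeholder):
docker build --build-arg http_proxy=http://proxy.example.com:3128 --build-arg https_proxy=http://proxy.example.com:3128 .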

Use the full path /usr/bin/pip to run pip. Example:
/usr/bin/pip install --upgrade pip

Running this command solved the same problem for me (Python 3.9):
RUN /usr/local/bin/python -m pip install --upgrade pip
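Invoking pip as a module (python -m pip) means the interpreter, not the pip wrapper script, performs the upgrade, so pip is not trying to replace its own executable mid-uninstall, which matches the failed-uninstall traceback in the original question. In a Dockerfile:
RUN python3 -m pip install --upgrade pip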


How to pack and transport only the delta of a container?

I have the following scenario:
A Docker or Podman container is set up and deployed to several production instances that are NOT connected to the internet.
A new release has been developed that needs only one new package, like a Python module a few kilobytes in size.
The new package is installed in the dev container, and the Dockerfile has been updated to also load the latest module (just for documentation, because the target system cannot reach docker.io).
We have packed the new container release, which is more than a gigabyte in size, and could transport it to the target environments.
My question is: is there a way, to pack, create and transport only a delta of the container compared to the previously deployed version?
podman version 3.4.7
echo "\
FROM jupyter/scipy-notebook
USER root
RUN apt-get update && apt-get install --no-install-recommends -y mupdf-tools python3-dev
USER user
RUN pip -V
RUN pip install fitz==0.0.1.dev2
RUN pip install PyMuPDF==1.20.2
RUN pip install seaborn
RUN pip install openpyxl==3.0.10
RUN pip install flask==2.1.3
" > sciPyDockerfile
podman build --tag python_runner -f ./sciPyDockerfile
sudo podman save -o python_runner.tar python_runner
gzip python_runner.tar
The result is a file
1.1G Nov 28 15:27 python_runner.tar.gz
Is there any way to pack the delta only?
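One possible approach (a sketch, untested against your setup; the image tags and module name are placeholders): build the new release FROM the previous image so the change becomes a single small layer, then export with skopeo's dir: transport, which stores each layer as a content-addressed blob. rsync will then only transfer blobs the target does not already have:
# Build the delta as one small layer on top of the previous release
echo "\
FROM python_runner:1.0
RUN pip install some-new-module==1.2.3
" > deltaDockerfile
podman build --tag python_runner:1.1 -f ./deltaDockerfile
# Export to a directory of content-addressed layer blobs
skopeo copy containers-storage:localhost/python_runner:1.1 dir:/srv/export/python_runner
# Unchanged layer blobs already present at the destination are skipped
rsync -av /srv/export/python_runner/ target-host:/srv/import/python_runner/
# On the target:
skopeo copy dir:/srv/import/python_runner containers-storage:localhost/python_runner:1.1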

pip unable to fetch wheels when inside container

When I try to build an image which includes installing Python modules via pip, the build container only uses source distributions and therefore always compiles all modules from source, which is extremely annoying...
How can I get pip inside the container to use wheels just like the host system does?
When run from the host system
# pip3 -V
pip 18.1 from /usr/lib/python3/dist-packages/pip (python 3.7)
# pip3 install numpy
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting numpy
Using cached https://www.piwheels.org/simple/numpy/numpy-1.21.2-cp37-cp37m-linux_armv7l.whl
When run inside docker run -it python:3.7 /bin/bash
# pip3 -V
pip 21.2.4 from /usr/local/lib/python3.7/site-packages/pip (python 3.7)
# pip3 install numpy
Collecting numpy
Downloading numpy-1.21.2.zip (10.3 MB)
When I run everything in one command without a shell, it works too
# docker run -it python:3.8 python3 -m pip install -U pip wheel setuptools && python3 -m pip install Pillow
Requirement already satisfied: pip in /usr/local/lib/python3.8/site-packages (21.2.4)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/site-packages (0.37.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.8/site-packages (57.4.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting Pillow
Downloading https://www.piwheels.org/simple/pillow/Pillow-8.3.1-cp37-cp37m-linux_armv7l.whl (1.3 MB)
Things I've tried
Run as privileged
Use the pip flag --no-binary=:all:
System information:
Raspberry Pi 4
Linux 5.10.17-v7l+ armv7l GNU/Linux
Raspbian GNU/Linux 10 (buster)
ARMv7 Processor rev 3 (v7l)
Notice that you're using https://www.piwheels.org/, which only provides wheels for the Raspberry Pi. Open https://www.piwheels.org/simple/numpy/ and you can see there are only armv6l and armv7l wheels.
Maybe in the container you forgot to set the index URL to https://www.piwheels.org/simple?
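Note also that in the "one command" example, the host shell splits the line at &&, so the second pip install actually ran on the host rather than in the container; that is why it found piwheels wheels there. To make the container itself use piwheels, configure the index inside it, either per command or persistently (pip reads /etc/pip.conf globally on Linux):
pip3 install --extra-index-url https://www.piwheels.org/simple numpy
or bake it into the image:
RUN printf '[global]\nextra-index-url=https://www.piwheels.org/simple\n' > /etc/pip.conf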

dockerized conan shows FileExistsError: [Errno 17] File exists: './util-linux-2.33.1/tests/expected/libmount/context-X-mount.mkdir'

The full error is
ERROR: libmount/2.33.1: Error in source() method, line 26
tools.get(**self.conan_data["sources"][self.version])
FileExistsError: [Errno 17] File exists: './util-linux-2.33.1/tests/expected/libmount/context-X-mount.mkdir'
My setup is a dockerized Conan where the container is built like this:
FROM gcc:10.2.0
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y cmake
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install conan
RUN conan remote add bincrafters https://api.bintray.com/conan/bincrafters/public-conan
CMD ["/bin/bash"]
My basepath contains the folders build/conan and there is a conanfile.txt in the basepath.
The conanfile.txt contains:
[requires]
sdl2/2.0.12@bincrafters/stable
The motivation to dockerize is to get a stable build environment across all my machines.
build/conan is mounted to store all cached files between builds, or so I hope it will once this works.
I made this into a repository so you can check out this example
EDIT: I modified the repo as I went on investigating - the original is in the commit history.
https://github.com/Aypahyo/dockerized-conan-shows-fileexistserror-errno-17-file-exists-util-linux-2.git
What I want is to run conan install from within the container against a directory mounted from the host, with caching on the host machine.
My obvious question is: What is happening here and how do I fix it?
The issue seems to be stemming from the volume mount on my system.
I followed user uilianries' advice and went for building a container based on an official conan-docker-tools container, as well as moving the volume into a Docker-managed volume.
The error message is gone now, although it looks like this approach in general may not fit what I want to do.
I modified the repository for this question with what I ended up with. https://github.com/Aypahyo/dockerized-conan-shows-fileexistserror-errno-17-file-exists-util-linux-2
Caching does not work as I want it to, but that is not what this question was about.
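For reference, a Docker-managed volume for the Conan cache looks something like this (the image name and workspace path are placeholders; /root/.conan is Conan 1.x's default cache location):
docker volume create conan-cache
docker run -it -v conan-cache:/root/.conan my-conan-image conan install /work --build=missing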

Unable to create superuser in cvat

I am able to build and run the cvat tool. But when I try to create a superuser, it gives me the error below.
ImportError: No module named 'gitdb.utils.compat'
I am running the command below to create a superuser.
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'
Does anyone have any idea or suggestion for the above problem?
It seems the newer version of gitdb does not work with cvat (the default version is 4.0.2). You can follow Furkan Kirac's answer, but with gitdb version 0.6.4:
# pip uninstall gitdb
# pip install gitdb==0.6.4
This problem is most probably due to a newer gitdb2 Python package.
If cvat is already built as a Docker container, for testing, log into the container as root, uninstall gitdb2, and install an older gitdb.
docker exec -it -u root cvat bash
pip3 uninstall gitdb2
pip3 install gitdb
Then running the Python script should work. If that is the case, a persistent solution is to rebuild the containers.
You need to edit the Dockerfile as below:
# Install git application dependencies
...
fi
RUN pip3 uninstall -y gitdb2
RUN pip3 install --no-cache-dir gitdb
Run "docker-compose build".
Hope this helps.

Why doesn't dockerized CentOS recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I've tried to run several commands inside a raw centos container, and everything worked fine; I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? Why can everything be executed on the command line but not with RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
FROM centos
RUN yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. What's wrong?
You need to install pip first using
yum install python-pip
or, if you need Python 3 (from EPEL):
yum install python36-pip
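On CentOS, python36-pip lives in EPEL, so enable that repository first (a sketch for CentOS 7):
yum install -y epel-release
yum install -y python36-pip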
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result in CentOS/RHEL.
UPDATE
From comment
But when I execute the same commands from docker run -ti centos, everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try the full path to pip?
As it has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation on this, in the slides of BMitch which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
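For the cd case specifically, WORKDIR is an alternative that does persist across instructions:
WORKDIR /app/src
RUN ./build.sh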
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Each RUN command creates a different layer and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
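Putting it together, a Dockerfile along these lines should work (a sketch; rh-python36 is the SCL package from the question):
FROM centos
RUN yum install -y centos-release-scl && yum install -y rh-python36
# Shell state is not preserved between layers, so source and install in one RUN:
RUN source scl_source enable rh-python36 && pip install django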
