How to install Conan inside Docker and use it

I am trying to use Conan by installing it inside a Docker container. To do so, I included these lines in my Dockerfile:
RUN apt-get install -y python3-pip
RUN sudo python3 -m pip install conan
After starting the Docker container, I have these lines in my CMakeLists.txt:
conan_cmake_run(
REQUIRES
${CONAN_PACKAGES})
${CONAN_PACKAGES} contains the packages required to build my project. While running CMake I get this error:
-- Conan: Automatic detection of conan settings from cmake
-- Conan: Settings= -s;build_type=Debug;-s;compiler=gcc;-s;compiler.version=8;-s;compiler.libcxx=libstdc++11
-- Conan: checking conan executable
-- Conan: Found program /usr/bin/conan
-- Conan: Version found
-- Conan executing: /usr/bin/conan install . -s build_type=Debug -s compiler=gcc -s compiler.version=8 -s compiler.libcxx=libstdc++11 -g=cmake
CMake Error at cmake/conan.cmake:402 (message):
Conan install failed='No such file or directory'
Call Stack (most recent call first):
cmake/conan.cmake:497 (conan_cmake_install)
CMakeLists.txt:17 (conan_cmake_run)
-- Configuring incomplete, errors occurred!
Adding a conan_remote works fine.
However, executing these lines inside the container after creating it fixed the problem:
pip install conan
sudo ln -s ~/.local/bin/conan /usr/bin/conan
From my initial understanding of Conan, it looks like it expects a user-level installation, but in Docker everything is installed as root.
Can someone please help me fix this?
I'm using this version of cmake-conan: https://github.com/conan-io/cmake-conan/tree/release/0.15

Inside the Dockerfile, instead of
RUN sudo python3 -m pip install conan
try this:
RUN pip install conan
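As a sketch, the fixed Dockerfile could look like this (the base image is an assumption, not from the question). Installing Conan as root during the build places the executable in /usr/local/bin, which is already on PATH, so cmake-conan can find it:

```dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python3-pip
# No sudo needed: Docker build steps already run as root, and a
# root-level install places conan in /usr/local/bin (on PATH)
# rather than in ~/.local/bin
RUN pip3 install conan
```

This avoids the symlink workaround entirely, since the executable never lands in a user-level directory.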


Pipenv: command not found in Jenkins

I am getting /var/lib/jenkins/workspace/<workspace name>#tmp/durable-9687b918/script.sh: line 1: pipenv: command not found while running a Jenkins pipeline.
It fails while running the following command:
pipenv install --dev
If I run the same command on the server where Jenkins is hosted, it works fine. This started failing after I reinstalled Pipenv with the below steps:
Uninstalled using: pip uninstall pipenv
Installed using: pip3 install pipenv; I also tried sudo -H pip3 install -U pipenv, but the issue persists.
I had to switch to pip3 because I am now using Python 3 instead of 2.
Check your PATH: you might be running Python 2.x while the pipenv module was installed with pip3. Set your PATH accordingly.
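A hedged way to diagnose this on the Jenkins host (the directory shown is illustrative): pip3 installs user-level entry points under the user base's bin directory, which the Jenkins service account may not have on its PATH.

```shell
# Show where pip3 places user-installed entry points such as pipenv
python3 -m site --user-base
# Prepend that bin directory so the pipenv script can be found
export PATH="$(python3 -m site --user-base)/bin:$PATH"
# Invoking the module through the interpreter sidesteps PATH entirely
python3 -m pipenv --version || echo "pipenv not importable by python3; reinstall it with pip3"
```

If the module invocation works but the bare `pipenv` command does not, the fix is to export the adjusted PATH in the Jenkins pipeline environment.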

Unable to create superuser in cvat

I am able to build and run the cvat tool, but when I try to create a superuser it gives me the error below.
ImportError: No module named 'gitdb.utils.compat'
I am running the below command to create the superuser:
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'
Does anyone have any idea or suggestion for the above problem?
It seems the newer version of gitdb does not work with cvat (the default version is 4.0.2); you can follow Furkan Kirac's answer but with gitdb version 0.6.4:
# pip uninstall gitdb
# pip install gitdb==0.6.4
This problem is most probably due to a newer gitdb2 Python package.
If cvat is already built as a Docker container, for testing you must log into the container as root, uninstall gitdb2, and install an older gitdb:
docker exec -it -u root cvat bash
pip3 uninstall gitdb2
pip3 install gitdb
Then running the Python script should work. If that is the case, the persistent solution is to rebuild the containers.
You need to edit the Dockerfile as below:
# Install git application dependencies
...
fi
RUN pip3 uninstall -y gitdb2
RUN pip3 install --no-cache-dir gitdb
Run "docker-compose build".
Hope this helps.

Docker RUN fails but executing the same commands in the image succeeds

I'm working on a Jetson TK1 deployment scheme where I use Docker to create the root filesystem, which then gets flashed onto the device.
The way this works is I create an armhf image using the nvidia provided sample filesystem with a qemu-arm-static binary which I can then build upon using standard docker tools.
I then have a "flasher" image which copies the contents of the filesystem, creates an ISO image, and flashes it onto my device.
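The setup described above could be sketched like this (the archive name and qemu binary location are assumptions based on the description, not taken from the project):

```dockerfile
# Unpack NVIDIA's sample armhf filesystem as the image root
FROM scratch
ADD sample_rootfs.tbz2 /
# The statically linked qemu binary lets the x86 host execute ARM
# binaries during subsequent RUN steps
COPY qemu-arm-static /usr/bin/qemu-arm-static
```

From here, `docker build` can layer standard RUN steps on top of the ARM rootfs.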
The problem I'm having is that I get inconsistent results between installing apt packages using a docker RUN statement vs. entering the image and installing them interactively, i.e.:
# docker build -t jetsontk1:base .
Dockerfile
from jetsontk1:base1
RUN apt update
RUN apt install build-essential cmake
# or
RUN /bin/bash -c 'apt install build-essential cmake -y'
vs:
docker run -it jetsontk1:base1 /bin/bash
# apt update
# apt install build-essential cmake
When I install using the Dockerfile RUN step I get the following error:
Processing triggers for man-db (2.6.7.1-1) ...
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::append
qemu: uncaught target signal 6 (Aborted) - core dumped
Aborted (core dumped)
The command '/bin/sh -c /bin/bash -c 'apt install build-essential cmake -y'' returned a non-zero code: 134
I have no issues when manually installing applications from inside the container, but there's no point in using Docker to manage this image-building process if I can't do apt installs.
The project can be found here: https://github.com/dtmoodie/tk1_builder
The current state of the issue as presented is at commit 8e22c0d5ba58e9fdab38e675eed417d73ae0aad9.
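For what it's worth, the log shows the abort happening in man-db's trigger ("Processing triggers for man-db"), which is known to be fragile under qemu user emulation. A commonly suggested workaround (an assumption here, unverified on this particular base image) is to skip man page database regeneration during installs:

```dockerfile
FROM jetsontk1:base1
# Tell man-db not to rebuild its database during package installs;
# the trigger is what aborts under qemu-arm-static (assumption based
# on the "Processing triggers for man-db" line preceding the crash)
RUN echo 'man-db man-db/auto-update boolean false' | debconf-set-selections
RUN apt update && apt install -y build-essential cmake
```

If that holds, the interactive case may simply be avoiding the trigger timing that crashes under emulation.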

Why doesn't Dockerized CentOS recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I tried running several commands inside a raw centos container and everything worked fine: I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? Why can everything be executed from the command line but not with RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
from centos
run yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. But why?
You need to install pip first using
yum install python-pip
or if you need python3 (from epel)
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora29, but you should get similar result in Centos/RHEL
UPDATE
From comment
But when I execute the same commands from docker run -ti centos everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try full path to pip?
As it has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation on this, in the slides of BMitch which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Because each RUN command creates a different layer, and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
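Putting the pieces together, a minimal Dockerfile sketch (the package name rh-python36 is taken from the question; the base image tag is an assumption):

```dockerfile
FROM centos:7
RUN yum install -y centos-release-scl && yum install -y rh-python36
# scl_source modifies the current shell rather than spawning a new one,
# so it can share a single RUN step with pip
RUN source scl_source enable rh-python36 && pip install django
```

This sidesteps both pitfalls above: no new interactive shell is spawned, and the enable and install happen in the same layer.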

Unable to upgrade pip in docker build

When running the Docker build (in Jenkins CI), it fails on upgrading pip (the last line of the Dockerfile). I need it to upgrade to version 8.1.1, as the log suggests, because my deploy fails on a pip version mismatch.
Dockerfile
FROM ubuntu:14.04
FROM python:3.4
# Expose a port for gunicorn to listen on
EXPOSE 8002
# Make a workdir and virtualenv
WORKDIR /opt/documents_api
# Install everything else
ADD . /opt/documents_api
# Set some environment variables for PIP installation and db management
ENV CQLENG_ALLOW_SCHEMA_MANAGEMENT="True"
RUN apt-get update
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
Here's the error:
Step 15 : RUN pip3 install --upgrade pip
19:46:00 ---> Running in 84e2bcc850c0
19:46:04 Collecting pip
19:46:04 Downloading pip-8.1.1-py2.py3-none-any.whl (1.2MB)
19:46:04 Installing collected packages: pip
19:46:04 Found existing installation: pip 7.1.2
19:46:04 Uninstalling pip-7.1.2:
19:46:05 Successfully uninstalled pip-7.1.2
19:46:10 Exception:
19:46:10 Traceback (most recent call last):
19:46:10 File "/usr/local/lib/python3.4/shutil.py", line 424, in _rmtree_safe_fd
19:46:10 os.unlink(name, dir_fd=topfd)
19:46:10 FileNotFoundError: [Errno 2] No such file or directory: 'pip'
19:46:10 You are using pip version 7.1.2, however version 8.1.1 is available.
When you use two FROM directives, only the last one determines the final image (the ubuntu:14.04 stage is discarded), which is why the build is messed up.
First, remove FROM ubuntu:14.04, and don't apt-get update in a Dockerfile; it's a bad practice (your image will be different every time you build, defeating the whole purpose of containers/Docker).
Second, you can check official python images Dockerfile to know which version of pip is installed, for example, python:3.4 (it's already v8.1.1).
Third, there is a special image for your case (external application): python:3.4-onbuild. Your Dockerfile can be reduced to:
FROM python:3.4-onbuild
ENV CQLENG_ALLOW_SCHEMA_MANAGEMENT="True"
EXPOSE 8002
CMD python myapp.py
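For context, the onbuild variant injects roughly these steps into a downstream build via ONBUILD triggers (reproduced from memory, so treat as approximate):

```dockerfile
# Approximate triggers defined by python:3.4-onbuild:
ONBUILD COPY requirements.txt /usr/src/app/
ONBUILD RUN pip install --no-cache-dir -r requirements.txt
ONBUILD COPY . /usr/src/app
```

So the image expects a requirements.txt next to the Dockerfile and installs your dependencies automatically at build time.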
One last thing: try to use Alpine-based images; they're much smaller (for Python, almost 10 times smaller than the Ubuntu-based ones).
It turns out the host I was running on had no outside (internet) access, so the upgrade was failing. We solved it by adding another package to the DTR that had the necessary version in it.
Use the full path to run pip. For example:
/usr/bin/pip install --upgrade pip
Running this command solved the same problem for me (Python 3.9):
RUN /usr/local/bin/python -m pip install --upgrade pip
