Unable to create superuser in cvat - docker

I am able to build and run the CVAT tool, but when I try to create a superuser, it gives me the error below.
ImportError: No module named 'gitdb.utils.compat'
I am running the command below to create the superuser.
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'
Does anyone have any idea or suggestion for the above problem?

It seems the newer version of gitdb does not work with CVAT (the default version is 4.0.2). You can follow Furkan Kirac's answer, but with gitdb pinned to version 0.6.4:
# pip uninstall gitdb
# pip install gitdb==0.6.4
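To confirm which gitdb version the container currently has, a quick check (assuming the container is named cvat, as in the question):
docker exec -it -u root cvat bash -ic 'pip3 show gitdb gitdb2'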

This problem is most probably due to a newer gitdb2 Python package.
If CVAT is already built as a Docker container, then for testing you must log into the container as root, uninstall gitdb2, and install an older gitdb:
docker exec -it -u root cvat bash
pip3 uninstall gitdb2
pip3 install gitdb
Then running the Python script should work. If that is the case, the persistent solution is to rebuild the containers.
You need to edit the Dockerfile as below:
# Install git application dependencies
...
fi
RUN pip3 uninstall -y gitdb2
RUN pip3 install --no-cache-dir gitdb
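If you need the exact 0.6.4 pin from the first answer, the same Dockerfile edit can name the version explicitly:
RUN pip3 uninstall -y gitdb2
RUN pip3 install --no-cache-dir gitdb==0.6.4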
Run "docker-compose build".
Hope this helps.

Related

Docker ssh-agent forwarding breaks when multiple repos installed via requirements.txt

My team is using docker build secrets to forward our ssh-agent during a docker image build. We are running Docker Desktop on Mac, version 20.10.22, and we use the following step to install all image dependencies:
RUN --mount=type=ssh,uid=50000 pip install --no-warn-script-location --user -r requirements.txt -c constraints.txt
Our requirements.txt has pip installing multiple repos via git+ssh, and we are having intermittent exchange identification errors depending on the number of repos included and the order of their installation:
ssh_exchange_identification: Connection closed by remote host
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
This same step runs successfully if we install the dependencies one-by-one:
RUN --mount=type=ssh,uid=50000 xargs -L 1 pip install --no-warn-script-location --user -c constraints.txt < requirements.txt
Installing one-by-one is not advised, because it does not allow the dependency resolver to work on all dependencies that might be common to the packages. We believe this issue may come from a break in docker ssh-agent forwarding when pip uses subprocess to install the entries of our requirements.txt.
Has anyone run into this issue or know of any workarounds? Thank you!
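One workaround that may be worth trying (a sketch, not a confirmed fix; the Host block assumes the repos live on github.com) is SSH connection multiplexing, so the many git+ssh clones pip spawns reuse a single connection instead of each opening their own:
# In the Dockerfile, before the pip install step:
RUN mkdir -p ~/.ssh && printf '%s\n' \
    'Host github.com' \
    '  ControlMaster auto' \
    '  ControlPath /tmp/ssh-%r@%h-%p' \
    '  ControlPersist 60s' \
    >> ~/.ssh/config
RUN --mount=type=ssh,uid=50000 pip install --no-warn-script-location --user -r requirements.txt -c constraints.txt
Note that ~ must resolve to the home of the user pip runs as (the uid=50000 user in the mount above).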

Installing python3.10 in ubuntu container

I am looking for some help in writing a Dockerfile for Ubuntu 18.04 that installs Python 3.10.
Currently it is written in such a way that it gets the default version of Python 3 (i.e. 3.6) that ships with Ubuntu 18.04.
The question is: is there any way I can get Python 3.10 with Ubuntu 18.04? The requirement is to use either the slim or non-slim version of the Python 3.10 Bullseye image from Docker Hub.
You can use the Ubuntu 18.04 Docker image, then install Python 3.10 inside it:
FROM ubuntu:18.04
RUN apt-get -y update && apt-get install -y software-properties-common \
    && add-apt-repository -y ppa:deadsnakes/ppa \
    && apt-get install -y python3.10
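A quick build-and-check, assuming the Dockerfile above is saved in the current directory (the image tag is arbitrary):
docker build -t py310-bionic .
docker run --rm py310-bionic python3.10 --version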
I am able to build the image with Python 3.10 as follows.
Step-1: Write a docker file
FROM python:3.10-bullseye
# WORKDIR creates /WORK_REPO if it does not exist, so mkdir/cd are not needed
WORKDIR /WORK_REPO
ADD hi.py .
CMD ["python", "-u", "hi.py"]
Step-2: Build the image
docker build -t image_name .
Step-3: Run the docker image
docker run image_name
Step-4: Connect to the container and check the Python version
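For example (a sketch; the container ID comes from docker ps):
docker exec -it <container_id> python --version
# or, without a running container:
docker run --rm image_name python --version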
I hope this is helpful for someone who is completely new to writing a Dockerfile.
Many Thanks,
Suresh.

Pipenv: command not found in Jenkins

I am getting /var/lib/jenkins/workspace/<workspace name>#tmp/durable-9687b918/script.sh: line 1: pipenv: command not found while running a Jenkins pipeline.
It fails while running the following command:
pipenv install --dev
If I run the same command on the server where Jenkins is hosted, it works fine. This started failing after I reinstalled Pipenv with the steps below:
Uninstalled using: pip uninstall pipenv
Installed using: pip3 install pipenv; I tried sudo -H pip3 install -U pipenv as well, but the issue persists.
I had to switch to pip3 because I am using Python 3 now instead of 2.
Check your PATH: you might be running Python 2.x while the pipenv module was installed with pip3, so set your PATH accordingly.
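A minimal check along those lines, assuming pipenv was installed for the Jenkins user with pip3 --user (paths are illustrative):
python3 -m site --user-base               # e.g. /var/lib/jenkins/.local
ls "$(python3 -m site --user-base)/bin"   # pipenv should be listed here
export PATH="$(python3 -m site --user-base)/bin:$PATH"
pipenv install --dev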

Why dockered centos doesn't recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I have tried running several commands inside a raw CentOS container; everything worked fine, and I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I am getting:
/bin/sh: pip: command not found
What could be wrong? Why can everything be executed on the command line, but not with RUN? I have tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
from centos
run yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. What's wrong?
You need to install pip first using
yum install python-pip
or, if you need Python 3 (from EPEL):
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result in CentOS/RHEL.
UPDATE
From comment
But when I execute the same commands from docker run -ti centos, everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try the full path to pip?
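For example, with the SCL Python 3.6 layout from the question, the full path would be:
/opt/rh/rh-python36/root/usr/bin/pip install django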
As it has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation of this in the slides by BMitch, which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
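An equivalent form uses WORKDIR, which, unlike a cd inside RUN, does persist across layers:
WORKDIR /app/src
RUN ./build.sh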
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Because each RUN command creates a different layer, and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
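Put together in a Dockerfile, that looks something like this (a sketch using the rh-python36 package from the question):
FROM centos:7
RUN yum install -y centos-release-scl && yum install -y rh-python36
# scl_source adjusts PATH etc. only within this single RUN layer
RUN source scl_source enable rh-python36 && pip install django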

Docker, Supervisord and supervisor-stdout

I'm trying to centralize output from supervisord and its processes using supervisor-stdout. But with this supervisord configuration:
#supervisord.conf
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx
stdout_events_enabled = true
stderr_events_enabled = true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
(Note that the config section for supervisor-stdout is exactly the same as the example on the supervisor-stdout site.)
...and this Dockerfile:
#Dockerfile
FROM python:3-onbuild
RUN apt-get update && apt-get install -y nginx supervisor
# Setup supervisord
RUN pip install supervisor-stdout
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY nginx.conf /etc/nginx/nginx.conf
# restart nginx to load the config
RUN service nginx stop
# Start processes
CMD supervisord -c /etc/supervisor/conf.d/supervisord.conf -n
I can build the image just fine, but running a container from it gives me:
Error: supervisor_stdout:event_handler cannot be resolved within [eventlistener:stdout]
EDIT
The output from running:
supervisord -c /etc/supervisor/conf.d/supervisord.conf -n
is:
Error: supervisor_stdout:event_handler cannot be resolved within [eventlistener:stdout]
For help, use /usr/bin/supervisord -h
I had the same problem. In short, you need to install the Python package that provides the supervisor_stdout:event_handler handler. You should be able to do so by issuing the following commands:
apt-get install -y python-pip
pip install supervisor-stdout
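To verify that the handler can now be resolved, a quick check (the module and function names are taken from the error message):
python -c "from supervisor_stdout import event_handler; print('ok')"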
If you have pip installed on that container, a simple pip install supervisor-stdout should suffice. More info about that specific package can be found here:
https://pypi.python.org/pypi/supervisor-stdout
AFAIK, there is no Debian package that provides supervisor-stdout, so the easiest method to install it is through pip.
Hope it helps whoever comes here as I did.
[Edit]
As Vin-G suggested, if you still have a problem even after going through these steps, supervisord might be stuck in an old version. Try updating it.
Cheers!
I had the exact same problem and solved it by using Ubuntu 14.04 as the base image instead of Debian Jessie (I was using the python:2.7 image, which is based on Jessie).
You can refer to this complete working implementation: https://github.com/rehabstudio/docker-gunicorn-nginx.
EDIT: as pointed out by @Vin-G in his comment, it might be because the supervisor version shipped with Debian Jessie is too old. You could try to remove it from apt and install it with pip instead.
Very similar to the above, but I don't think that there is a complete answer.
I had to remove the apt version:
apt-get remove supervisor
Then reinstall with pip, but with pip2, as the current version of supervisor doesn't support Python 3:
apt-get install -y python-pip
pip2 install supervisor
pip2 install supervisor-stdout
This all then worked.
Although the supervisord path is now
/usr/local/bin/supervisord
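So, with a Dockerfile like the one in the question, the CMD would become, e.g.:
CMD ["/usr/local/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf", "-n"]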
Hope that helps.
I used this hacky way to get it to work. It works in Debian Jessie as well.
I simply pasted the module's file into one of my own in my project directory, like /app/supervisord_stdout.py.
I then added it to the conf like this (/app is the project directory for my files in the container):
[eventlistener:stdout]
command = python supervisord_stdout.py
buffer_size = 100
events = PROCESS_LOG
directory=/app
result_handler=supervisord_stdout:event_handler
environment=PYTHONPATH=/app
