Issue faced while dockerizing person tracking - docker

I've built an application which detects and tracks a region of interest. It works absolutely fine on my local machine, but when I dockerize the app on OpenCV version 4.5.5.64 I get this error:
"qt.qpa.plugin: Could not find the Qt platform plugin "xcb" in "" "
When I downgrade to OpenCV version 4.1.2.30 it starts working (note: I want to stay on OpenCV 4.5.5.64 for tracker-related reasons).
I am running the Docker image like:
sudo docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix --device /dev/video0 --gpus all --ipc=host myImage
It works with the older version of OpenCV but not with 4.5.5.64. I've been banging my head against this issue for the last 4 days now; I would really appreciate some help. Below is the Dockerfile content:
FROM nvcr.io/nvidia/pytorch:22.04-py3
RUN rm -rf /opt/pytorch
RUN apt-get update && apt-get autoclean
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip
RUN pip3 install pyqt5
RUN apt-get install -y '^libxcb.*-dev' libx11-xcb-dev libglu1-mesa-dev libxrender-dev libxi-dev libxkbcommon-dev libxkbcommon-x11-dev
RUN apt-get install -y libsm6 libxrender1 libfontconfig1
ENV QT_DEBUG_PLUGINS=1
COPY requirements.txt .
RUN python -m pip install --upgrade pip
RUN pip uninstall -y torch torchvision torchtext Pillow
RUN pip install --no-cache -r requirements.txt albumentations wandb gsutil notebook "Pillow>=9.1.0" \
torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
RUN pip install opencv-contrib-python==4.5.5.64
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
ENV OMP_NUM_THREADS=8
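Since the Dockerfile already sets QT_DEBUG_PLUGINS=1, the failing run should print which plugin directories Qt searched. One hedged diagnostic step (assuming bash and python are on PATH in the image): newer opencv manylinux wheels bundle their own Qt plugins, which can conflict with the PyQt5/system ones, so it is worth seeing what the image actually contains:
# Print where cv2 lives and list any xcb platform plugins present in the image
sudo docker run -it --ipc=host myImage bash -c \
"python -c 'import cv2; print(cv2.__file__)'; find / -name 'libqxcb*' 2>/dev/null"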

Related

Copy file out of docker image during docker build [duplicate]

I have a simple Dockerfile to build Python requirements into a zip file to be uploaded to AWS Lambda.
FROM amazonlinux:2.0.20221004.0
RUN yum install -y python37 && \
yum install -y python3-pip && \
yum install -y zip && \
yum clean all
RUN python3.7 -m pip install --upgrade pip && \
python3.7 -m pip install virtualenv
COPY aws_requirements.txt .
RUN pip install -r aws_requirements.txt -t ./python
RUN zip -r python.zip ./python
Is there a way to copy the python.zip out of the image to the host during the docker build?
With BuildKit, you can output the build result to a directory on the host instead of pushing it to a registry. So you can put the zip file in a scratch image and then output that:
FROM amazonlinux:2.0.20221004.0 as build
RUN yum install -y python37 && \
yum install -y python3-pip && \
yum install -y zip && \
yum clean all
RUN python3.7 -m pip install --upgrade pip && \
python3.7 -m pip install virtualenv
COPY aws_requirements.txt .
RUN pip install -r aws_requirements.txt -t ./python
RUN zip -r /python.zip ./python
FROM scratch as artifact
COPY --from=build /python.zip /python.zip
Then running:
docker build --output type=local,dest=out .
will create an out/python.zip file.
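Note that this requires BuildKit; on older Docker releases it can be enabled per command, and -o is shorthand for --output:
DOCKER_BUILDKIT=1 docker build -o out .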

Install ImageMagick in Docker

I'm running a Python script to manipulate pictures. I was using an ImageMagick binding for Python called wand.
Here is the Dockerfile.
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
RUN apt-get -y install python3-pip
RUN pip install pillow
RUN pip install wand
WORKDIR /usr/app/src
COPY image_converter.py ./
COPY test_image_converter.py ./
RUN python3 -m unittest test_image_converter.py
And this is the error message
ImportError: MagickWand shared library not found.
You probably had not installed ImageMagick library.
Try to install:
https://docs.wand-py.org/en/latest/guide/install.html
Alright, I got it thanks to this one: Docker unattended installation of ImageMagick
RUN DEBIAN_FRONTEND="noninteractive" apt-get install libmagickwand-dev --no-install-recommends -y
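For completeness, a sketch of the question's Dockerfile with that fix folded in (same files and packages as above; pip3 swapped in since only python3-pip is installed, and on newer Ubuntu releases system-wide pip installs may additionally need a virtualenv or --break-system-packages):
FROM ubuntu:latest
# Install Python plus the ImageMagick development library that wand links against
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        python3 python3-pip libmagickwand-dev
RUN pip3 install pillow wand
WORKDIR /usr/app/src
COPY image_converter.py test_image_converter.py ./
RUN python3 -m unittest test_image_converter.py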

Docker image size comes up to 1.7 GB for Ubuntu with Python packages

Following is my Dockerfile:
FROM ubuntu:18.04 AS builder
RUN apt update -y
RUN apt install python3.8 -y && apt install python3-pip -y
RUN apt install build-essential automake pkg-config libtool libffi-dev libgmp-dev -y
RUN apt install libsecp256k1-dev -y
RUN apt install openjdk-8-jre -y
RUN apt install git -y
RUN apt install libkrb5-dev -y
RUN apt install vim -y
RUN mkdir /opt/app
RUN chown -R root:root /opt/app
COPY ["requirements.txt","/opt/app/requirements.txt"]
SHELL ["/bin/bash", "-c"]
WORKDIR /opt/app
RUN pip3 install -r requirements.txt && apt-get -y clean all
RUN mkdir /opt/app/
RUN chown -R root:root /opt/app/
RUN cd /opt/app/
RUN git clone -b master https://bitbucket.org/heroes/test.git
CMD ["bash","/opt/app/bin/connect.sh"]
The Docker image builds with a file size of 1.7 GB. I need OpenJDK, hence I cannot use a standard Python image as the base. When I run docker history, I can see 2 or 3 layers (installing the packages above, like Python 3.8, OpenJDK, and libsecp256k1-dev) taking 400 MB to 500 MB each. Ubuntu as a base image takes only 64 MB; the rest of the size comes from my Dockerfile's layers.
I believe I need to rewrite the Dockerfile to reduce the image size, which I did, but nothing concrete came of it.
Please assist me in reducing the image to less than 1 GB at least.
[Update]
Below is my updated Dockerfile:
FROM ubuntu:18.04 AS builder
WORKDIR /opt/app
COPY requirements.txt /opt/app/aws/requirements.txt
RUN mkdir -p /opt/app/aws \
&& apt-get update -yq \
&& apt-get install -y python3.8 python3-pip openjdk-8-jre -yq && apt-get -y clean all \
&& chown -R root:root /opt/app && cd /opt/app/aws && pip3 install -r requirements.txt
FROM alpine
COPY --from=builder /opt/app /opt/app
SHELL ["/bin/bash", "-c"]
CMD ["bash","/opt/app/aws/bin/connector/connect.sh"]
(Screenshot of image size omitted.)
After removing unwanted libraries like git and using a multi-stage build, the image is still approx 1.7 GB, which I believe is a lot. Any suggestions to improve this?
You have multiple issues going on.
First, each of your RUN apt install lines increases the image size. You should combine them into a single RUN stage and, at the end of that stage, delete all cached apt files.
Second, you're installing unnecessary stuff. Why would you need vim and git, for instance? Why are you installing build-essential and other build-related packages if you're not building anything?
Third, it seems you tried to do a multi-stage build but ended up adding everything to the same image. Read up on Python multi-stage builds.
If we consider best practices, use a single RUN instead of multiple RUNs.
For example:
RUN apt-get update -yq \
    && apt-get install -yq python3-dev build-essential curl \
    && pip install -r requirements.txt \
    && apt-get purge -y --auto-remove gcc python3-dev build-essential
You can use multi-stage builds: if you don't require git in your final image, you can drop it in the final stage.
Also, if possible, use an Alpine-based image.
Try disabling APT's recommended packages with --no-install-recommends; you can read more about it here. A short sketch is below.
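For example, applied to the heavier packages from the question's Dockerfile (package names copied from above; ubuntu:18.04 assumed):
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-8-jre-headless libsecp256k1-dev \
    && rm -rf /var/lib/apt/lists/*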
Switching to the headless JRE (openjdk-8-jre-headless), the image is now smaller:
FROM ubuntu:18.04 AS builder
RUN apt update -y
RUN apt install python3-pip -y
RUN apt install build-essential automake pkg-config libtool libffi-dev libgmp-dev -y
RUN apt install libsecp256k1-dev -y
RUN apt install openjdk-8-jre-headless -y
RUN apt install git -y
RUN apt install libkrb5-dev -y
RUN apt install vim -y
RUN mkdir /opt/app
RUN chown -R root:root /opt/app
COPY ["requirements.txt","/opt/app/requirements.txt"]
SHELL ["/bin/bash", "-c"]
WORKDIR /opt/app
RUN pip3 install -r requirements.txt && apt-get -y clean all
RUN git clone -b master https://bitbucket.org/heroes/test.git
CMD ["bash","/opt/app/bin/connect.sh"]

How to install AWS CLI in docker container based on image “java:8”

I have a Dockerfile that is like:
FROM java:8
LABEL maintainer="CMS"
RUN apt-get install python-pip
RUN pip install awscli
....
.....
[Error: Unable to locate package python-pip]
My end goal is to have Java 8 and the AWS CLI installed. Also, I don't want to use curl statements in the Dockerfile, and I don't want to use the plain Ubuntu image.
How should I go about doing it?
The error means apt cannot locate the python-pip package. Run apt-get update first so the package lists are populated, then install pip; a minimal sketch follows.
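A minimal sketch, assuming the Debian release underlying java:8 can still reach its apt repositories (the image is old, so the sources may need to point at archive.debian.org these days):
FROM java:8
# Refresh the package lists before installing, otherwise apt cannot locate python-pip
RUN apt-get update && apt-get install -y python-pip
RUN pip install awscli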
Try updating your Dockerfile to:
FROM java:8
LABEL maintainer="CMS"
RUN apt-get update && apt-get install -y \
software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y \
python3.4 \
python3-pip
RUN pip3 install awscli
....
.....
If you want to base it on top of openjdk:8 image, try the following:
FROM openjdk:8
RUN set -eux; \
apt-get update; \
apt-get install -y --no-install-recommends \
python3-setuptools \
python3-pip \
; \
rm -rf /var/lib/apt/lists/*
RUN pip3 --no-cache-dir install -U awscli
RUN apt-get clean
The other option is to use an Alpine-based image:
FROM openjdk:8-alpine
RUN set -eux; \
apk add python3 ; \
pip3 --no-cache-dir install -U awscli
Sources:
https://bitbucket.org/vodkaseledka/openjdk8-awscli
https://bitbucket.org/vodkaseledka/openjdk8-awscli-alpine
Or you can get pre-builds from here:
https://hub.docker.com/repository/docker/savnn/openjdk8-awscli
https://hub.docker.com/repository/docker/savnn/openjdk8-awscli-alpine
This works for me: create a Dockerfile:
FROM openjdk:8-alpine
RUN apk update;
RUN set -eux; \
apk add python3 ; \
pip3 --no-cache-dir install -U awscli; \
pip3 install --upgrade pip;
RUN apk add groff
Build with docker build . -t aws, then run: docker run -it aws /bin/sh
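To check the install in the resulting image (the aws tag comes from the build command above):
docker run --rm aws aws --version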

How to run tensorflow project in docker?

I am a newbie in Docker, but I have searched a lot about the problem I am facing.
I have code that uses TensorFlow, PyQt, and other packages. I have pulled tensorflow/tensorflow:1.4.0-gpu-py3 and nvidia/cuda:8.0-cudnn6-runtime, and I have also built an image of my application with some dependencies.
I tried to run all the above images with docker-compose as below:
version: '3'
services:
  nvidia:
    image: "nvidia/cuda:8.0-cudnn6-runtime"
  tensorflow:
    image: "tensorflow/tensorflow:1.4.0-gpu-py3"
  app:
    image: my_app
But I am getting the error ImportError: No module named 'tensorflow'.
Please help me by suggesting how I should solve this.
Edit:
The following code sample is just a few lines of my code.
import sys
from PyQt5 import QtCore, QtGui, QtQml, QtQuick
from OpenGL import GL
import cv2 # .cv2 as cv2
from multiprocessing import Process,Queue, Value, Manager
import os
import tensorflow as tf
Edit:
# Use an official Python runtime as a parent image
FROM ubuntu:16.04
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN \
apt-get update && \
apt-get install -y python python-dev python-pip python-virtualenv && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils && apt-get install -y libgtk2.0-dev python python-dev python3 python3-dev python3-pip
RUN apt-get update && apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
RUN pip install setuptools pip --upgrade --force-reinstall
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
#RUN apt-get update -y
# Install packages
#RUN apt-get install -y curl
#RUN apt-get install -y postgresql
#RUN apt-get install -y postgresql-client
#RUN apt-get install -y python3-numpy python3-opengl python-qt4 python-qt4-gl
# Run app.py when the container launches
CMD ["python3", "Working.py"]
requirements.txt:
PyOpenGL
PyQt5
opencv-python
You have 3 separate Docker containers: Nvidia, TensorFlow, and your application.
When your application imports tensorflow, there is no TensorFlow package in its own container; it lives in a separate container.
The suggestion is to remove the TensorFlow container and build your app into the TensorFlow image.
In your Dockerfile, change the FROM image:
FROM ubuntu:16.04 to FROM tensorflow/tensorflow:1.4.0-gpu-py3
Then adjust the other installation steps of the Dockerfile, because the tensorflow image already has Python 3 installed; a sketch is below.
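A hedged sketch of the Dockerfile after that change (system packages carried over from the question; the TensorFlow 1.4 base image already ships python3 and pip, and whether every package is still needed on top of it is untested):
FROM tensorflow/tensorflow:1.4.0-gpu-py3
WORKDIR /app
COPY . /app
# System libraries the original Dockerfile installed for OpenCV/GUI support
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgtk2.0-dev build-essential cmake git pkg-config \
        libavcodec-dev libavformat-dev libswscale-dev \
    && rm -rf /var/lib/apt/lists/*
# tensorflow itself comes from the base image, so requirements.txt only needs the rest
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD ["python3", "Working.py"]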
