Trying to run curl from within my container, but I'm getting this error. I already copied the library into the container after installing curl, so I'm not sure what I'm missing:
FROM debian:stretch
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
cmake \
curl \
make \
wget \
unzip \
bash \
jq \
libssl1.0-dev \
libasl-dev \
libsasl2-dev \
pkg-config \
libsystemd-dev \
zlib1g-dev
COPY /lib/x86_64-linux-gnu/libcom_err.so* /lib/x64_64-linux-gnu/
docker run -it --env LD_LIBRARY_PATH="/usr/lib/x64_64-linux-gnu/" myimage .
# curl
curl: error while loading shared libraries: libcom_err.so.2: cannot open shared object file: No such file or directory
Instead of trying to copy the library in from the host, add the package that provides it (libcomerr2 on stretch) to the apt-get install phase, and let apt figure out how to fetch it and resolve its dependencies:
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
cmake \
curl \
make \
wget \
unzip \
bash \
jq \
libcomerr2 \
libssl1.0-dev \
libasl-dev \
libsasl2-dev \
pkg-config \
libsystemd-dev \
zlib1g-dev
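More generally, when a binary fails with a missing shared object, you can list what it needs and then map the missing library back to the Debian package that ships it. A quick sketch (apt-file is not installed by default):
# show which shared objects curl needs and which are missing
ldd "$(which curl)" | grep "not found"
# map the missing library to its package
apt-get update && apt-get install -y apt-file && apt-file update
apt-file search libcom_err.so.2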
I get strange errors when compiling my project inside the Docker container using make's -j option.
The errors look like this:
/root/projects/obj/linux_debug/src/myos_make/libmyos.a: error adding symbols: Cannot allocate memory
/usr/protobuf-3.9.2/lib/libprotobuf.a: error adding symbols: Bad address
Compiling the same project inside the container without -j, and outside the container with -j, passes OK.
The Dockerfile:
#
# This docker must be built from projects/src folder as context
#
FROM ubuntu:18.04 as base
# copy the dockerfile to make it possible to keep track of the content of the image
COPY ./docker_build/docker_files/Dockerfile.ubuntu18.04-builder $HOME/.
# Disable Prompt During Packages Installation
ARG DEBIAN_FRONTEND=noninteractive
# Update Ubuntu Software repository
RUN apt update -y && apt upgrade -y
# Install base utils
RUN apt install -y \
nano \
wget \
sudo \
curl \
&& apt clean -y
# Install additional packages needed for build.
RUN apt install -y \
nasm \
pkg-config \
bc \
python \
python3 \
python3-pip \
sshpass \
libapr1 \
libapr1-dev \
&& apt clean -y
RUN apt install -y \
rpm \
libaio-dev \
libnuma-dev \
numactl \
valgrind \
openssl \
libssl-dev \
ldap-utils \
libldap2-dev \
libncurses5-dev \
libncursesw5-dev \
uuid-dev \
ncurses-base \
expat \
libfuse-dev \
cmake \
build-essential \
autotools-dev \
autoconf \
automake \
doxygen \
linux-headers-4.15.0-173-generic \
g++-multilib \
lib32z1-dev \
libasan4 \
software-properties-common \
libtool \
unzip \
xsltproc \
&& apt clean -y
RUN ln -s /usr/bin/doxygen /bin/doxygen
# install GCC compiler
RUN add-apt-repository ppa:ubuntu-toolchain-r/test -y \
&& apt update -y\
&& apt install gcc-10 -y \
&& apt install g++-10 -y \
&& apt clean -y && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 60 --slave /usr/bin/g++ g++ /usr/bin/g++-10 && \
update-alternatives --config gcc
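A small aside on the update-alternatives calls above: --config is interactive, so it cannot really prompt during a non-interactive image build; if the intent is to pin gcc-10 explicitly, the non-interactive equivalent would be:
update-alternatives --set gcc /usr/bin/gcc-10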
RUN curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py && \
python get-pip.py
RUN ln -sf /usr/local/bin/pip2 /usr/bin/pip
RUN pip install \
pyyaml \
jinja2 \
gcovr
FROM base as env-prepare
COPY external/protobuf /protobuf
RUN cd protobuf && ./build_proto.sh
FROM base as builder
COPY --from=env-prepare /usr/protobuf-3.9.2/ /usr/protobuf-3.9.2/
COPY --from=env-prepare /usr/protobuf-c-1.3.2/ /usr/protobuf-c-1.3.2/
COPY docker_build/docker-image-release /etc/docker-image-release
ARG UID=1000
USER ${UID}
# This entrypoint is to make everything run inside bash, so the devtools script will always run before the command
ENTRYPOINT [ "/bin/bash", "-c" ]
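One thing worth checking with parallel-link failures like "error adding symbols: Cannot allocate memory": the -j jobs may simply be exhausting the memory available to the container. A hedged sketch of things to try (the job count, memory limit, and image name are illustrative, not a confirmed fix):
# cap parallelism instead of using an unbounded -j
make -j"$(nproc)"
# or give the build container more memory headroom; note that with the
# bash -c entrypoint above, the command must be passed as a single string
docker run --memory=8g --memory-swap=8g my-builder-image 'make -j4'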
I tried to integrate an application, QCPump, inside an existing Docker image containing another application, QAtrack+. The goal is to use QCPump inside QAtrack+.
The application code seems to be integrated, but when I launch it, I get an error:
ImportError: libjpeg.so.8: cannot open shared object file: No such file or directory
The error is raised by the wxPython package.
Okay, so I have to install it. Unfortunately, my Docker image runs Debian 11, and Debian seems to have dropped this package several years ago. After some research, I found that for Debian this package is "replaced" by libjpeg-dev. So I installed that. And same result...
I found the source of the library (wxPython), which has a Docker setup for Debian 10: https://github.com/wxWidgets/Phoenix/blob/master/docker/build/debian-10/Dockerfile
I took this part and integrated it into my Dockerfile:
RUN apt-get install -y \
freeglut3 \
freeglut3-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libgstreamer-plugins-base1.0-dev \
libgtk-3-dev \
libjpeg-dev \
libnotify-dev \
libsdl2-dev \
libsm-dev \
libtiff-dev \
libwebkit2gtk-4.0-dev \
libxtst-dev; \
apt-get clean;
But same result...
On some forums, people mentioned that the dynamic linker path has to be updated. I tried it this way, but I'm not sure about it:
RUN export LD_LIBRARY_PATH=/usr/local/lib
To be honest, I am not sure this is the problem, and so not sure this is the solution here...
Any idea about this problem?
Here is my complete Dockerfile if you need it ;)
FROM python:3.6
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' > /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update && apt-get install -y \
cron postgresql-client-10 cifs-utils dos2unix \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get install tzdata
ENV TZ 'Europe/Paris'
RUN dpkg-reconfigure -f noninteractive tzdata
RUN touch /root/.is_inside_docker
RUN pip install virtualenv
RUN date "+%H:%M:%S %d/%m/%y"
RUN apt-get -q update && \
apt-get install -yq chromium && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update -y && apt-get install -y libsdl2-ttf-2.0-0 && \
apt-get update -y && apt-get install -y libjpeg-dev libaio1 libaio-dev && \
wget -q -O /tmp/libpng12.deb http://mirrors.kernel.org/ubuntu/pool/main/libp/libpng/libpng12-0_1.2.54-1ubuntu1_amd64.deb \
&& dpkg -i /tmp/libpng12.deb \
&& rm /tmp/libpng12.deb \
&& apt-get install -y \
freeglut3 \
freeglut3-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libgstreamer-plugins-base1.0-dev \
libgtk-3-dev \
libjpeg-dev \
libnotify-dev \
libsdl2-dev \
libsm-dev \
libtiff-dev \
libwebkit2gtk-4.0-dev \
libxtst-dev; \
apt-get clean;
RUN export LD_LIBRARY_PATH=/usr/local/lib
WORKDIR /usr/src/qatrackplus
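A general Docker note on the RUN export line above: each RUN instruction executes in its own shell, so an exported variable does not survive into later layers or into the running container. To make the setting persist in the image, the instruction would be:
ENV LD_LIBRARY_PATH=/usr/local/lib
Whether LD_LIBRARY_PATH is actually the fix for the libjpeg.so.8 error is a separate question; this only makes the attempted setting take effect.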
I'm currently trying to run an application using Docker but get the following error message when I start the application:
error while loading shared libraries: libopencv_highgui.so.4.4: cannot open shared object file: No such file or directory
I assume that something is going wrong in the Dockerfile and that the installation is not complete or correct, so I have included the OpenCV section at the end of the post.
Did I miss an important step, or is there an error in the Dockerfile?
FROM nvidia/cuda:10.2-devel-ubuntu18.04 as TOOLKITS
RUN apt-get update && apt-get install -y apt-utils
# Install additional packages
RUN apt-get install -y \
build-essential \
bzip2 \
checkinstall \
cmake \
curl \
gcc \
gfortran \
git \
pkg-config \
python3-pip \
python3-dev \
python3-numpy \
nano \
openexr \
unzip \
wget \
yasm
FROM TOOLKITS as GIT_PULLS
WORKDIR /
RUN git clone https://github.com/opencv/opencv.git
RUN git clone https://github.com/opencv/opencv_contrib.git
FROM GIT_PULLS as OPENCV_PREPERATION
RUN apt-get install -y \
libgtk-3-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libatlas-base-dev \
libtbb2 \
libtbb-dev \
libdc1394-22-dev
FROM OPENCV_PREPERATION as OPENCV_CMAKE
WORKDIR /
RUN mkdir /opencv/build
WORKDIR /opencv/build
RUN cmake \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=/usr/local \
-DINSTALL_C_EXAMPLES=ON \
-DINSTALL_PYTHON_EXAMPLES=ON \
-DWITH_TBB=ON \
-DWITH_V4L=ON \
-DOPENCV_GENERATE_PKGCONFIG=ON \
-DWITH_OPENGL=ON \
-DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-DOPENCV_PC_FILE_NAME=opencv.pc \
-DBUILD_EXAMPLES=ON ..
FROM OPENCV_CMAKE as BUILD_OPENCV_MAKE
RUN make -j $(nproc)
RUN make install
FROM TOOLKITS
COPY --from=XXX /opencv /opencv
COPY --from=XXX /opencv_contrib /opencv_contrib
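For what it's worth, one thing stands out in this Dockerfile: the final stage copies only the cloned source trees, while make install put the built libraries under /usr/local of the builder stage, so libopencv_highgui.so.4.4 never reaches the final image. A sketch of the copy that appears to be missing (stage name taken from the Dockerfile above; untested):
COPY --from=BUILD_OPENCV_MAKE /usr/local /usr/local
RUN ldconfig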
I was facing the same issue when installing OpenCV in Docker with a Python image. You probably don't need this many dependencies, but it's an option; I also have a lightweight version that fit my case. Please give the following a try.
Heavy-loaded version:
FROM python:3.7
RUN apt-get update \
&& apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
RUN pip install numpy
WORKDIR /
ENV OPENCV_VERSION="4.1.1"
# install opencv-python from its source
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
RUN ln -s \
/usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \
/usr/local/lib/python3.7/site-packages/cv2.so
RUN apt-get --fix-missing update && apt-get --fix-broken install && apt-get install -y poppler-utils && apt-get install -y tesseract-ocr && \
apt-get install -y libtesseract-dev && apt-get install -y libleptonica-dev && ldconfig && apt install -y libsm6 libxext6 && apt install -y python-opencv
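After the build, a quick sanity check that the module actually loads (image name hypothetical):
docker run --rm my-opencv-image python3.7 -c "import cv2; print(cv2.__version__)"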
Lightweight version:
FROM python:3.7
RUN apt-get update -y
RUN apt update && apt install -y libsm6 libxext6
In my case, I ended up using the heavy-loaded version just to save some hassle, but both versions should work fine. For your reference, please also see this link, with thanks to Neo Anderson's great help.
I also had lots of issues in this process, and found this repository:
https://github.com/janza/docker-python3-opencv
Clone or download it and add the additional dependencies and files according to your requirements.
apt-get update -y
apt install -y libsm6 libxext6
apt update
pip install pyglview
apt install -y libgl1-mesa-glx
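These are X11 and OpenGL runtime libraries that cv2 loads at import time. You can confirm they are now visible to the dynamic loader with:
ldconfig -p | grep -E "libSM|libXext|libGL"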
I'm looking for ideas as to why this is failing.
I've done the same steps on my host and they work well; the packages I'm calling out provide the needed dependencies.
Build command:
docker build -t testbuild .
Dockerfile:
FROM registry.redhat.io/rhel7:latest
RUN yum install -y yum-utils
RUN yum-config-manager --enable \
EPEL_7_EPEL_7 \
Community_mysql-connectors-community \
Community_mysql-tools-community \
Community_mysql57-community \
rhel-7-server-extras-rpms/x86_64 \
rhel-7-server-optional-rpms/7Server/x86_64 \
rhel-7-server-rh-common-rpms/7Server/x86_64 \
rhel-7-server-rpms/7Server/x86_64 \
rhel-server-rhscl-7-rpms/7Server/x86_64
RUN true \
&& yum install -y \
cairo \
yum-utils\
collectd \
openldap-devel \
rrdtool \
gcc \
rrdtool-devel \
pyrrd \
rrdtool-python \
python-ldap \
wget \
pycairo-devel \
pycairo \
python-devel \
collectd-nginx \
findutils \
rrdtool \
logrotate \
memcached \
nginx \
nodejs \
npm \
redis \
pkgconfig \
sqlite \
expect \
git \
python3\
python3-devel\
libffi-devel \
postgresql-devel \
postgresql-devel \
mysql-community-client \
mysql-community-libs \
mysql-community-common \
mysql-community-libs-compat
RUN yum clean all
RUN pip3 install \
virtualenv\
# && /usr/bin/easy_install virtualenv \
&& /usr/local/bin/virtualenv /opt/graphite \
&& . /opt/graphite/bin/activate \
&& pip3 install \
PyMySQL \
django==1.11.24 \
django-statsd-mozilla \
fadvise \
gunicorn \
msgpack-python \
redis \
rrdtool \
python-ldap \
mysqlclient \
psycopg2
The error I am getting is:
ERROR: Command errored out with exit status 1:
command: /opt/graphite/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-67zlal6a/rrdtool/setup.py'"'"'; file='"'"'/tmp/pip-install-67zlal6a/rrdtool/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info
cwd: /tmp/pip-install-67zlal6a/rrdtool/
Complete output (5 lines):
/tmp/tmp_python_rrdtoolfr1k715h/test_rrdtool.c:2:17: fatal error: rrd.h: No such file or directory
#include <rrd.h>
        ^
compilation terminated.
Error: Unable to compile the binary module. Do you have the rrdtool header and libraries installed?
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info
Check the logs for full command output.
But I've verified that the files exist via the yum-installed packages.
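For instance, one way to confirm that the header the rrdtool build wants is actually present would be:
rpm -ql rrdtool-devel | grep rrd.h
# expected to list /usr/include/rrd.h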
If I comment out this package, I then get a mysql_config error, which I also don't get on my current test host. That leads me to believe something is going wrong earlier in the build.
Any ideas?
I don't have access to the RHEL repositories, so I tried with CentOS 7 and was having the same issues.
Installing these additional packages helped to build the image successfully:
RUN yum install -y epel-release
RUN yum install -y python36-pip
RUN yum install -y mysql mysql-devel
RUN yum install -y python36-devel
Here is the final Dockerfile:
FROM centos:7
RUN yum install -y yum-utils
RUN yum-config-manager --enable \
EPEL_7_EPEL_7 \
Community_mysql-connectors-community \
Community_mysql-tools-community \
Community_mysql57-community \
rhel-7-server-extras-rpms/x86_64 \
rhel-7-server-optional-rpms/7Server/x86_64 \
rhel-7-server-rh-common-rpms/7Server/x86_64 \
rhel-7-server-rpms/7Server/x86_64 \
rhel-server-rhscl-7-rpms/7Server/x86_64
RUN true \
&& yum install -y \
cairo \
yum-utils\
collectd \
openldap-devel \
rrdtool \
gcc \
rrdtool-devel \
pyrrd \
rrdtool-python \
python-ldap \
wget \
pycairo-devel \
pycairo \
python-devel \
collectd-nginx \
findutils \
rrdtool \
logrotate \
memcached \
nginx \
nodejs \
npm \
redis \
pkgconfig \
sqlite \
expect \
git \
python3\
python3-devel\
libffi-devel \
postgresql-devel \
postgresql-devel \
mysql-community-client \
mysql-community-libs \
mysql-community-common \
mysql-community-libs-compat
RUN yum install -y epel-release
RUN yum install -y python36-pip
RUN yum install -y mysql mysql-devel
RUN yum install -y python36-devel
RUN yum clean all
RUN pip3 install \
virtualenv\
&& /usr/local/bin/virtualenv /opt/graphite \
&& . /opt/graphite/bin/activate \
&& pip3 install \
PyMySQL \
django==1.11.24 \
django-statsd-mozilla \
fadvise \
gunicorn \
msgpack-python \
redis \
rrdtool \
python-ldap \
mysqlclient \
psycopg2
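The likely reason these additions matter, as far as I can tell: python36-devel supplies the Python headers needed to compile C extensions such as rrdtool and psycopg2, and mysql-devel provides the mysql_config tool that the mysqlclient build looks for. A quick check before running pip:
# both should resolve before pip tries to compile the bindings
which mysql_config
ls /usr/include/rrd.h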
I'm trying to install OpenVINO on my Raspberry Pi using Docker.
I have this Dockerfile:
FROM raspbian/stretch
ARG INSTALL_DIR="/opt/intel/inference_engine_vpu_arm"
RUN apt-get -y update \
&& DEBIAN_FRONTEND=noninteractive && apt-get -y upgrade && apt-get autoremove && \
apt-get install -y \
apt-transport-https \
build-essential \
cmake \
cpio \
lsb-release \
pciutils \
python3.5 \
python3.5-dev \
python3-pip \
python3-setuptools \
ffmpeg \
libjpeg-dev \
libtiff5-dev \
libjasper-dev \
libpng12-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libgtk2.0-dev \
libgtk-3-dev \
libatlas-base-dev \
gfortran \
libgstreamer1.0-0 \
libgstreamer-plugins-base1.0-0
RUN usermod -a -G users "$(whoami)"
COPY inference_engine_vpu_arm $INSTALL_DIR
RUN sed -i "s|<INSTALLDIR>|$INSTALL_DIR|" $INSTALL_DIR/bin/setupvars.sh && \
echo "source $INSTALL_DIR/bin/setupvars.sh" >> $HOME/.bashrc
RUN ["/bin/bash", "-c", "source $INSTALL_DIR/bin/setupvars.sh && /bin/bash $INSTALL_DIR/install_dependencies/install_NCS_udev_rules.sh"]
RUN pip3 install numpy
RUN apt autoremove -y && \
rm -rf /var/lib/apt/lists/*
CMD ["/bin/bash"]
But I get this error when I try to build:
E: Unable to correct problems, you have held broken packages.
The command '/bin/sh -c apt-get -y update..... returned a non-zero code: 100
Do you have any idea?
Thanks
After some Google searching, it seems the error happens because apt is not able to connect to the configured repositories. This is likely because the base image has not been updated for a while, as can be seen on Docker Hub.
If you are not familiar with the available repositories, you can generate a sources list easily with online tools such as https://debgen.simplylinux.ch/index.php?generate
You can put it into the Docker image with a simple COPY command like
COPY sources.list /etc/apt/sources.list
where the first argument refers to a local file and the second to a path inside the image.
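As an illustration only (stretch has since moved to the Debian archive, and Raspbian uses its own mirrors, so verify the right URLs for your base image), a sources.list for Debian stretch might look like:
deb http://archive.debian.org/debian stretch main contrib non-free
deb http://archive.debian.org/debian-security stretch/updates main contrib non-free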