I'm trying to use Docker with QEMU to build an ARM image on my x86 host computer.
I use arm64v8/ubuntu as the base image.
I built a simple OpenCV program and tried to use the ldd command to see its dependencies.
However, ldd always shows:
ldd: exited with unknown exit code (132)
ldd works if I save this image and load it on an ARM computer.
However, my main project is too large (or my ARM computer's storage is too small) to import onto the ARM computer, so I'd like to use ldd to find out which libraries are really necessary for this project.
I also tried nvcr.io/nvidia/l4t-base:r32.4.4 as the base image, and ldd shows:
ldd: exited with unknown exit code (139)
What should I do to use the ldd command when I'm inside an ARM image in Docker and my host computer is x86?
My Dockerfile looks like this:
FROM arm64v8/ubuntu
ENV TZ=Asia/Taipei
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get install -y \
build-essential \
cmake \
git \
libgtk2.0-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
pkg-config && \
apt-get clean
WORKDIR /home/gino/
RUN git clone https://github.com/opencv/opencv.git && \
cd ./opencv && \
mkdir build && \
cd build && \
cmake \
-D BUILD_SHARED_LIBS=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=OFF \
-D FORCE_VTK=OFF \
-D WITH_TBB=OFF \
-D WITH_GDAL=OFF \
-D WITH_V4L=ON \
-D WITH_XINE=OFF \
-D BUILD_EXAMPLES=OFF \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D BUILD_DOCS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_opencv_apps=OFF \
-D CMAKE_INSTALL_PREFIX=/home/gino/opencv_install \
.. && \
make -j8 && \
make install && \
ldconfig && \
apt-get clean
WORKDIR /home/gino/
RUN rm -R ./opencv
ADD ./ /home/gino/cvtest/
WORKDIR /home/gino/cvtest/
RUN make clean all
ENV HOME /home/developer
CMD /home/gino/cvtest/dockertest
main.cpp
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main()
{
    Mat frame;
    VideoCapture video(-1);   // -1: open the first available camera
    while (video.isOpened())
    {
        video >> frame;
        if (frame.empty())    // stop if the camera returns no frame
            break;
        imshow("frame", frame);
        waitKey(1);
    }
    cout << "Hello World!" << endl;
    return 0;
}
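A note on the symptom, and a sketch of a workaround: an exit status of 132 means the process was killed by SIGILL (128 + 4) and 139 means SIGSEGV (128 + 11), i.e. the emulated aarch64 loader crashes under qemu and ldd never gets to report anything. The direct library dependencies can still be listed without executing any ARM code, because readelf only parses the ELF headers (the path below is the binary built by the Dockerfile above; unlike ldd this does not resolve transitive dependencies). Re-registering the qemu user-mode handlers on the host is also worth trying if the emulation itself is suspect.
# works on the x86 host or inside the emulated container; readelf is architecture-agnostic
readelf -d /home/gino/cvtest/dockertest | grep NEEDED
# one commonly used way to (re)register qemu user-mode handlers on the x86 host
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes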
Related
I am trying to run the below docker image from https://hub.docker.com/r/fbcotter/docker-tensorflow-opencv/
FROM tensorflow/tensorflow:1.8.0-py3
RUN apt-get update
RUN apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libavformat-dev \
libhdf5-dev \
libpq-dev
RUN pip3 --no-cache-dir install \
numpy \
hdf5storage \
h5py \
scipy \
py3nvml
WORKDIR /
ENV OPENCV_VERSION="3.4.1"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DENABLE_AVX=ON \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3) \
-DPYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") .. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
RUN pip3 install -q keras==2.3.1
RUN pip3 install pyzmq
RUN pip3 install pillow
RUN mkdir -p /edge_app/src
WORKDIR /edge_app/src
COPY . ./
#CMD ["python","streamer.py"]
Command to run the docker image:
docker run --rm -it -p ip:port:port test
When I run the above docker image I am able to access it through a Jupyter notebook. My question is how to disable the Jupyter notebook, because I want to access the docker container through bash.
Thanks, help is highly appreciated.
You could directly run your container with a custom command:
docker run -it -p port:port test /bin/bash
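If the image starts Jupyter through an ENTRYPOINT rather than a CMD, overriding the command alone is not enough; in that case the entrypoint itself can be overridden as well (a sketch, reusing the image name and port mapping from above):
docker run -it --entrypoint /bin/bash -p port:port test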
Let's take an example. Here, I'm trying to read an image and write it to a temp folder using OpenCV. I want to put this desktop application in Docker and save the output using a Docker volume; from the volume, I want to get the output onto my local machine.
For the problem statement, I assigned a volume to the container so that I can save the output. When I run the code it executes, but I don't understand how to save the result to my local machine.
This is the Dockerfile for the OpenCV example:
FROM python:3.7
RUN apt-get update \
&& apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
RUN pip install numpy
ENV OPENCV_VERSION="4.1.0"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
RUN ln -s \
/usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \
/usr/local/lib/python3.7/site-packages/cv2.so
WORKDIR /opencv_example
COPY . .
I need some help understanding how Docker volumes are used for desktop applications, and how to save the volume's output to a local path.
How are you starting the container? Please paste your command.
If you're using Windows, you need to enable shared drives in the Docker settings and then start your container as shown below. If you're on macOS or Linux, you only need to execute the command (you probably have other flags there as well):
docker run -v <path-on-host>:<path-inside-container> <image-name>
For more reference check out this link.
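For example, assuming the image built from the Dockerfile above is tagged opencv_example and the script writes its results to /opencv_example/temp inside the container (both names here are only illustrative), a bind mount could look like this; whatever the program writes under the container path then shows up under the host path:
# ./output on the host is mapped onto the container's temp folder;
# your_script.py is a placeholder for the actual entry script
docker run --rm -v "$(pwd)/output:/opencv_example/temp" opencv_example python3 your_script.py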
I am trying to run riofs inside a docker container, however when I try to run riofs I get the following error:
fuse: device not found, try 'modprobe fuse' first
ERROR! Failed to mount FUSE partition !
ERROR! Failed to create FUSE fs ! Mount point: /path/to/dir
Here is what my Dockerfile looks like:
FROM ubuntu:16.04
RUN apt-get update -qq
RUN apt-get install -y \
build-essential \
gcc \
make \
automake \
autoconf \
libtool \
pkg-config \
intltool \
libglib2.0-dev \
libfuse-dev \
libxml2-dev \
libevent-dev \
libssl-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# riofs release to fetch; pass at build time, e.g. --build-arg VERSION=<x.y>
ARG VERSION
RUN curl -L https://github.com/skoobe/riofs/archive/v${VERSION}.tar.gz | tar zxv -C /usr/src
RUN cd /usr/src/riofs-${VERSION} && ./autogen.sh && ./configure --prefix=/usr && make && make install
WORKDIR /opt/riofs/bin
CMD ["bash"]
I needed to add the runtime privilege SYS_ADMIN because fuse needs permissions to mount/umount.
docker run -it --cap-add SYS_ADMIN --device /dev/fuse [IMAGE] bash
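For completeness, a broader alternative (it grants all capabilities and devices, so use it only if the targeted flags above are not enough) is to run the container fully privileged; either way, /dev/fuse should be visible inside the container before riofs can mount anything:
docker run -it --privileged [IMAGE] bash
# inside the container, confirm the device node exists
ls -l /dev/fuse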
I am using the polinux/httpd:centos repo to run Apache and PHP 7.1. The build seems to go ok. There are a few warnings related to keys, but none related to php or pgsql. The build completes successfully, but when I ssh into the container the module is not listed (php -m) and there's no extension config file in php.d.
I verified it is listed in the Dockerfile multiple times.
I can install php71-php-pgsql manually after starting the container, but then I can't restart Apache without restarting the container.
I've tried moving yum install php71-php-pgsql to the end of the Dockerfile as a separate RUN command (in addition to the original), but it reports it has already been installed; yet when I ssh into the container, it's not listed in the modules and there's no config, as mentioned above.
When I rebuild the container I stop and remove it, then run the build with the --no-cache option.
I'm stumped...
The Dockerfile is quite long, but I can post if that would be helpful.
Thanks.
UPDATE: Dockerfile per request...
FROM polinux/httpd:centos
ENV \
NVM_DIR="/usr/local/nvm" \
NODE_VERSION="9.2.0" \
GIT_VERSION="2.15.0" \
PHP_VERSION="71"
ADD mariadb.repo /etc/yum.repos.d/mariadb.repo
RUN \
rpm --rebuilddb && yum clean all && rm -rf /var/cache/yum && \
yum update -y && \
yum install -y \
wget \
patch \
bzip2 \
unzip \
make \
openssh-clients \
git \
MariaDB-client && \
rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm && \
yum install -y \
php${PHP_VERSION}-php \
php${PHP_VERSION}-php-bcmath \
php${PHP_VERSION}-php-cli \
php${PHP_VERSION}-php-common \
php${PHP_VERSION}-php-devel \
php${PHP_VERSION}-php-fpm \
php${PHP_VERSION}-php-gd \
php${PHP_VERSION}-php-gmp \
php${PHP_VERSION}-php-intl \
php${PHP_VERSION}-php-json \
php${PHP_VERSION}-php-mbstring \
php${PHP_VERSION}-php-mcrypt \
php${PHP_VERSION}-php-mysqlnd \
php${PHP_VERSION}-php-pgsql \
php${PHP_VERSION}-php-opcache \
php${PHP_VERSION}-php-pdo \
php${PHP_VERSION}-php-pear \
php${PHP_VERSION}-php-process \
php${PHP_VERSION}-php-pspell \
php${PHP_VERSION}-php-xml \
php${PHP_VERSION}-php-pecl-imagick \
php${PHP_VERSION}-php-pecl-mysql \
php${PHP_VERSION}-php-pecl-uploadprogress \
php${PHP_VERSION}-php-pecl-uuid \
php${PHP_VERSION}-php-pecl-memcache \
php${PHP_VERSION}-php-pecl-memcached \
php${PHP_VERSION}-php-pecl-redis \
php${PHP_VERSION}-php-pecl-zip && \
ln -sfF /opt/remi/php${PHP_VERSION}/enable /etc/profile.d/php${PHP_VERSION}-paths.sh && \
ln -sfF /opt/remi/php${PHP_VERSION}/root/usr/bin/{pear,pecl,phar,php,php-cgi,php-config,phpize} /usr/local/bin/. && \
mv -f /etc/opt/remi/php${PHP_VERSION}/php.ini /etc/php.ini && ln -s /etc/php.ini /etc/opt/remi/php${PHP_VERSION}/php.ini && \
rm -rf /etc/php.d && mv /etc/opt/remi/php${PHP_VERSION}/php.d /etc/. && ln -s /etc/php.d /etc/opt/remi/php${PHP_VERSION}/php.d && \
yum install -y \
ImageMagick \
GraphicsMagick \
gcc \
gcc-c++ \
libffi-devel \
libpng-devel \
zlib-devel && \
yum install -y ruby ruby-devel && \
echo 'gem: --no-document' > /etc/gemrc && \
gem update --system && \
gem install bundler && \
export PROFILE=/etc/profile.d/nvm.sh && touch $PROFILE && \
curl -sSL https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash && \
source $NVM_DIR/nvm.sh && \
nvm install $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
nvm use default && \
npm install -g \
gulp \
grunt-cli \
bower \
browser-sync && \
echo -e "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown apache /usr/local/bin/composer && composer --version && \
yum clean all && rm -rf /tmp/yum* && \
sed -i 's|SetHandler application/x-httpd-php|SetHandler "proxy:fcgi://127.0.0.1:9000"|g' /etc/httpd/conf.d/php${PHP_VERSION}-php.conf
ADD container-files /
ENV \
NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
RUN \
mkdir -p /data/tmp/php && \
chmod -R 777 /data/tmp
# Weird issue: For some reason pgsql is not installed above. May be OoO...
# Manually installing worked, so adding it here at the end.
RUN \
yum install php71-php-pgsql -y
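Two quick checks inside the running container that help narrow down this kind of "module missing" report, as a sketch (they assume the remi php71 binary is the one on PATH, as the symlinks above intend):
php --ini                # which php.ini and scan directory are actually loaded
php -m | grep -i pgsql   # whether the pgsql extension is registered at all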
Chalk this up to inexperience...
This turned out to be a problem with how I was referencing images, first when building and then when running the instance. I'm still not sure I fully understand it, but I think I was running the base image instead of my modified build.
For others that may run into similar problems, the 2 commands that helped me get this working are:
docker build --rm -t local/httpd-php71 .
... and then ...
docker run \
-d \
--name httpd-php71 \
--restart unless-stopped \
--net dockersubnet \
--volume /www:/var/www \
local/httpd-php71
Here 'local/httpd-php71' is my local/custom build. Previously I was not using any tag in the build command, and I was then referencing 'polinux/httpd:centos', the base image, in the run command.
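As a quick sanity check, Docker itself can report which image a container was created from (a sketch; the container name matches the run command above):
docker inspect --format '{{.Config.Image}}' httpd-php71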
Thanks.
What do I have to do to run the Turtlebot 3 Simulation as described in http://emanual.robotis.com/docs/en/platform/turtlebot3/simulation/ on a debian:buster system using docker?
The steps for debian:stretch using repositories in http://wiki.ros.org/lunar/Installation/Debian are not working with debian:buster, see https://github.com/ros-infrastructure/rospkg/issues/125
I finally found a solution to my question.
Create a Dockerfile as follows:
FROM osrf/ros:kinetic-desktop-full-jessie
RUN apt-get update && apt-get install -y --no-install-recommends screen
RUN apt-get install -y --no-install-recommends \
ros-kinetic-joy \
ros-kinetic-teleop-twist-joy \
ros-kinetic-teleop-twist-keyboard \
ros-kinetic-laser-proc \
ros-kinetic-rgbd-launch \
ros-kinetic-depthimage-to-laserscan \
ros-kinetic-rosserial-arduino \
ros-kinetic-rosserial-python \
ros-kinetic-rosserial-server \
ros-kinetic-rosserial-client \
ros-kinetic-rosserial-msgs \
ros-kinetic-amcl \
ros-kinetic-map-server \
ros-kinetic-move-base \
ros-kinetic-urdf \
ros-kinetic-xacro \
ros-kinetic-compressed-image-transport \
ros-kinetic-rqt-image-view \
ros-kinetic-gmapping \
ros-kinetic-navigation
RUN mkdir -p /root/catkin_ws/src/ \
&& cd /root/catkin_ws/src/ \
&& git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git \
&& git clone https://github.com/ROBOTIS-GIT/turtlebot3.git \
&& git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
RUN /bin/bash -c "source /opt/ros/kinetic/setup.bash; cd /root/catkin_ws && /opt/ros/kinetic/bin/catkin_make && /opt/ros/kinetic/bin/catkin_make -DCMAKE_INSTALL_PREFIX=/opt/ros/kinetic install"
RUN apt-get install -y --no-install-recommends vim bash-completion sudo
RUN apt-get install -y --no-install-recommends apt-utils
RUN useradd --create-home --shell /bin/bash robo
RUN echo "robo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/robo && chmod 0440 /etc/sudoers.d/robo
COPY ./start_simu.sh /usr/local/bin
RUN chmod 755 /usr/local/bin/start_simu.sh
RUN rm -rf /var/lib/apt/lists/*
USER robo
WORKDIR /home/robo
Add a file start_simu.sh in the same directory, containing:
#!/bin/bash
screen -dmS turtlebot_fake /bin/bash -c "source /opt/ros/kinetic/setup.bash;env TURTLEBOT3_MODEL=burger roslaunch turtlebot3_fake turtlebot3_fake.launch"
sleep 2
screen -S turtlebot_fake -X screen /bin/bash -c "source /opt/ros/kinetic/setup.bash;env TURTLEBOT3_MODEL=burger roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch"
source "/opt/ros/$ROS_DISTRO/setup.bash"
exec "/bin/bash"
Now build your docker image using sudo docker build --tag ros:turtlebot3_fake_node .
Run the image:
xhost +local:root
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" ros:turtlebot3_fake_node /usr/local/bin/start_simu.sh
Once the container is stopped, run xhost -local:root again.
The simulation runs inside screen; connect to it via screen -R.
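To attach to that screen session from another terminal on the host, one option (a sketch; the ancestor filter assumes only one container from this image is running) is:
sudo docker exec -it $(sudo docker ps -q --filter ancestor=ros:turtlebot3_fake_node) screen -R
# use screen -x instead if the session is already attached inside the container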