mpicc takes a long time in an Alpine container - Docker

I have a Docker container running alpine:latest in which I installed build-base, openmpi, openmpi-dev, etc., and basically everything works fine, except when I run
mpicc -v -time=time_out -o /root/cloud/test /root/cloud/mpi_hello_world.c
The preprocessing stage [-E] takes ~90 sec the first time; the second time it takes less than a second. The output of mpicc with the -v option is attached below. Please note that the produced executable runs fine and fast on all my nodes/slots.
To track this issue down I looked at the verbose output of mpicc -v [...], and between
...
End of search list.
<---- Between these two lines we spend ~85sec estimated ---->
GNU C17 (Alpine 10.3.1_git20211027) version 10.3.1 20211027 (x86_64-alpine-linux-musl)
...
we lose time. I have a hunch that gcc searches for something which it eventually finds, but I don't know what it is.
Can someone please help me identify the missing element?
Please see the output of the mpicc -v [...] command:
bash-5.1# mpicc -v -time=time_out -o /root/cloud/test /root/cloud/mpi_hello_world.c | tee /root/myFiles/mpicc_verbose
Using built-in specs.
COLLECT_GCC=/usr/bin/gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/lto-wrapper
Target: x86_64-alpine-linux-musl
Configured with: /home/buildozer/aports/main/gcc/src/gcc-10.3.1_git20211027/configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --target=x86_64-alpine-linux-musl --with-pkgversion='Alpine 10.3.1_git20211027' --enable-checking=release --disable-fixed-point --disable-libstdcxx-pch --disable-multilib --disable-nls --disable-werror --disable-symvers --enable-__cxa_atexit --enable-default-pie --enable-default-ssp --enable-cloog-backend --enable-languages=c,c++,d,objc,go,fortran,ada --disable-libssp --disable-libmpx --disable-libmudflap --disable-libsanitizer --enable-shared --enable-threads --enable-tls --with-system-zlib --with-linker-hash-style=gnu
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.3.1 20211027 (Alpine 10.3.1_git20211027)
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/cc1 -quiet -v /root/cloud/mpi_hello_world.c -quiet -dumpbase mpi_hello_world.c -mtune=generic -march=x86-64 -auxbase mpi_hello_world -version -o /tmp/ccdhIMIE.s
GNU C17 (Alpine 10.3.1_git20211027) version 10.3.1 20211027 (x86_64-alpine-linux-musl)
compiled by GNU C version 10.3.1 20211027, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.22-GMP
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
ignoring nonexistent directory "/usr/local/include"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include/fortify
/usr/include
/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/include
End of search list.
GNU C17 (Alpine 10.3.1_git20211027) version 10.3.1 20211027 (x86_64-alpine-linux-musl)
compiled by GNU C version 10.3.1 20211027, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.22-GMP
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
Compiler executable checksum: 3193578801129247e8be66bd6dd0fe05
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/as -v --64 -o /tmp/ccFKfcmE.o /tmp/ccdhIMIE.s
GNU assembler version 2.37 (x86_64-alpine-linux-musl) using BFD version (GNU Binutils) 2.37
COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/collect2 -plugin /usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccmkCMLh.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --eh-frame-hdr --hash-style=gnu -m elf_x86_64 --as-needed -dynamic-linker /lib/ld-musl-x86_64.so.1 -pie -z relro -z now -o /root/cloud/test /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/Scrt1.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crti.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtbeginS.o -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1 -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../.. /tmp/ccFKfcmE.o -rpath /usr/lib --enable-new-dtags -lmpi -lssp_nonshared -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtendS.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crtn.o
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
Here is my time_out file as well:
0.030072 0.006682 cc1 -quiet -v /root/cloud/mpi_hello_world.c -quiet -dumpbase mpi_hello_world.c -mtune=generic -march=x86-64 -auxbase mpi_hello_world -version -o /tmp/ccdhIMIE.s
0.002234 0.0017 as -v --64 -o /tmp/ccFKfcmE.o /tmp/ccdhIMIE.s
0.009905 0.011814 collect2 -plugin /usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccmkCMLh.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --eh-frame-hdr --hash-style=gnu -m elf_x86_64 --as-needed -dynamic-linker /lib/ld-musl-x86_64.so.1 -pie -z relro -z now -o /root/cloud/test /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/Scrt1.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crti.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtbeginS.o -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1 -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../.. /tmp/ccFKfcmE.o -rpath /usr/lib --enable-new-dtags -lmpi -lssp_nonshared -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtendS.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crtn.o
There doesn't seem to be a problem in the time_out file; all three stages are fast.
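One way to see where those ~85 seconds go would be to trace the whole compiler pipeline for file and network activity (a rough diagnostic sketch, assuming strace can be added with apk add strace; the log path is arbitrary):
apk add --no-cache strace
strace -f -tt -e trace=%file,%network -o /tmp/mpicc_trace.log \
    mpicc -o /root/cloud/test /root/cloud/mpi_hello_world.c
# look for large gaps between timestamps and for paths/hosts that are retried over and over
grep -nE "ENOENT|connect\(" /tmp/mpicc_trace.log | less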
Code is from here: mpi-hello-world/code
Thank you <3
Edit: Please see the Dockerfile
FROM amd64/alpine@sha256:a777c9c66ba177ccfea23f2a216ff6721e78a662cd17019488c417135299cd89 as node
ARG USER=mpiuser
ARG SSH_PATH=/etc/ssh
RUN ping -c 2 8.8.8.8
RUN apk add --no-cache \
bash \
build-base \
libc6-compat \
openmpi openmpi-dev \
openssh \
openrc \
nfs-utils \
neovim \
tini
RUN rm -rf /var/cache/apk
#https://wiki.alpinelinux.org/wiki/Setting_up_a_nfs-server
#https://wiki.alpinelinux.org/wiki/Setting_up_a_SSH_server
RUN adduser -S ${USER} -g "MPI Test User" -s /bin/ash -D ${USER} \
&& echo "${USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \
&& echo ${USER}:* | chpasswd \
&& echo root:* | chpasswd
RUN mkdir ~/.ssh \
# && rc-update add sshd \
# && rc-status \
# touch softlevel because system was initialized without openrc
&& echo "PermitRootLogin yes" >> ${SSH_PATH}/sshd_config \
&& echo "PubkeyAuthentication yes" >> ${SSH_PATH}/sshd_config \
&& echo "StrictHostKeyChecking no" >> ${SSH_PATH}/ssh_config \
&& rm /etc/motd
COPY --chmod=770 ./node_script/helper_node.sh /root/
RUN mkdir ~/cloud
# Using tini - All Tini does is spawn a single child (Tini is meant to be run in a container), and wait for it to exit all the while reaping zombies and performing signal forwarding.
# Docu: https://github.com/krallin/tini
ENTRYPOINT ["/sbin/tini", "-g", "-e 143" ,"-e 137", "--", "/root/helper_node.sh"]
And also /root/helper_node.sh:
# Start sshd, i.e. the ssh server, but gracefully make it shut up
/usr/sbin/sshd -D -d -h /root/.ssh/id_rsa -f /etc/ssh/sshd_config > /dev/null 2>&1
Launch with docker-compose: docker-compose rm -fsv;docker-compose build && docker compose up --scale node=4
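For anyone reproducing this without compose, roughly the same four-node setup can be started with plain docker commands (a sketch; the image and network names here are made up):
docker build -t mpi-node .
docker network create mpinet
for i in 1 2 3 4; do
    docker run -d --name node_$i --network mpinet mpi-node
done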
Edit 2 - This is reproducible with mpicc -E; mpicc -S; mpicc -C (full commands omitted for readability), and we see the same behaviour.
A funny observation, though: mpicc -v -E [...] gives:
mpicc -v -E -o test.i /root/cloud/mpi_hello_world.c
Using built-in specs.
COLLECT_GCC=/usr/bin/gcc
Target: x86_64-alpine-linux-musl
Configured with: /home/buildozer/aports/main/gcc/src/gcc-10.3.1_git20211027/configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --target=x86_64-alpine-linux-musl --with-pkgversion='Alpine 10.3.1_git20211027' --enable-checking=release --disable-fixed-point --disable-libstdcxx-pch --disable-multilib --disable-nls --disable-werror --disable-symvers --enable-__cxa_atexit --enable-default-pie --enable-default-ssp --enable-cloog-backend --enable-languages=c,c++,d,objc,go,fortran,ada --disable-libssp --disable-libmpx --disable-libmudflap --disable-libsanitizer --enable-shared --enable-threads --enable-tls --with-system-zlib --with-linker-hash-style=gnu
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.3.1 20211027 (Alpine 10.3.1_git20211027)
COLLECT_GCC_OPTIONS='-v' '-E' '-o' 'test.i' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/cc1 -E -quiet -v /root/cloud/mpi_hello_world.c -o test.i -mtune=generic -march=x86-64
ignoring nonexistent directory "/usr/local/include"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include/fortify
/usr/include
/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/include
End of search list.
<------------------ Wait time here ------------------->
COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-E' '-o' 'test.i' '-mtune=generic' '-march=x86-64'
Temporary fix - If I add
export COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
export LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
to /etc/profile and source /etc/profile, everything works as well as one could wish :)
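To make that stick inside the image, the same two lines can be appended to /etc/profile from a RUN step (a sketch that simply copies the paths from the verbose output above; adjust the GCC version if yours differs):
cat >> /etc/profile <<'EOF'
export COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
export LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
EOF
# note: /etc/profile is only read by login shells; for non-interactive use (e.g. mpirun over ssh),
# equivalent ENV lines in the Dockerfile achieve the same effect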

Related

Build opencv4 on Jetson Xavier NX failed

I'm trying to implement openpilot on a Jetson Xavier NX, so I'm following the https://github.com/eFiniLan/xnxpilot instructions to install the dependencies.
But when I install opencv4, I get the following errors in ".../opencv/build/CMakeFiles/CMakeError.log":
CMakeFiles/cmTC_ee78d.dir/CheckIncludeFile.c.o -c /home/tshu/opencv/build/CMakeFiles/CMakeTmp/CheckIncludeFile.c
/home/tshu/opencv/build/CMakeFiles/CMakeTmp/CheckIncludeFile.c:1:10: fatal error: sys/videoio.h: No such file or directory
#include <sys/videoio.h>
^~~~~~~~~~~~~~~
compilation terminated.
CMakeFiles/cmTC_ee78d.dir/build.make:65: recipe for target ‘CMakeFiles/cmTC_ee78d.dir/CheckIncludeFile.c.o’ failed
make[1]: *** [CMakeFiles/cmTC_ee78d.dir/CheckIncludeFile.c.o] Error 1
make[1]: Leaving directory ‘/home/tshu/opencv/build/CMakeFiles/CMakeTmp’
Makefile:126: recipe for target ‘cmTC_ee78d/fast’ failed
make: *** [cmTC_ee78d/fast] Error 2
The build command I used is
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D WITH_CUDA=ON \
-D CUDA_ARCH_PTX="" \
-D CUDA_ARCH_BIN="7.2" \
-D WITH_CUDNN=ON \
-D CUDNN_VERSION="8.0" \
-D BUILD_opencv_python3=ON \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_java=OFF \
-D WITH_GSTREAMER=ON \
-D WITH_GTK=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_EXAMPLES=OFF \
-D BUILD_FFMPEG=ON \
-D OPENCV_DNN_CUDA=ON \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D WITH_QT=ON \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D PYTHON_EXECUTABLE=/home/`whoami`/.pyenv/versions/3.8.5/bin/python \
-D PYTHON_DEFAULT_EXECUTABLE=/home/`whoami`/.pyenv/versions/3.8.5/bin/python \
-D PYTHON_PACKAGES_PATH=/home/`whoami`/.pyenv/versions/3.8.5/lib/python3.8/site-packages/ \
-D OPENCV_EXTRA_MODULES_PATH=/home/`whoami`/opencv_contrib/modules ..
The version of opencv I am trying to install is opencv-4.5.2.
Can someone give me some advice? Thank you.
JetPack comes with OpenCV preinstalled; both JetPack 4.4 and JetPack 4.6 include OpenCV 4.1.1.
Let me look at the link you sent and I will get back to you. You may need to install and compile OpenCV 4.5.2 from source. I wrote some instructions a while ago.
What JetPack version are you using? OpenCV 4.4 is CUDA GPU accelerated; use version 4.4 or higher to fully use the Super Resolution function provided by OpenCV. If you are using JetPack 4.4, you will need to remove the OpenCV 4.1.1 that ships with it and install 4.4 fresh.
Try this script file:
#!/bin/bash
#
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
#
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <Install Folder>"
exit
fi
folder="$1"
user="nvidia"
passwd="nvidia"
echo "** Remove OpenCV4.1 first"
sudo apt-get purge *libopencv*
echo "** Install requirement"
sudo apt-get update
sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install -y python2.7-dev python3.6-dev python-dev python-numpy python3-numpy
sudo apt-get install -y libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install -y libv4l-dev v4l-utils qv4l2 v4l2ucp
sudo apt-get install -y curl
sudo apt-get update
echo "** Download opencv-4.5.1"
cd $folder
curl -L https://github.com/opencv/opencv/archive/4.5.1.zip -o opencv-4.5.1.zip
curl -L https://github.com/opencv/opencv_contrib/archive/4.5.1.zip -o opencv_contrib-4.5.1.zip
unzip opencv-4.5.1.zip
unzip opencv_contrib-4.5.1.zip
cd opencv-4.5.1/
echo "** Building..."
mkdir release
cd release/
# note: the contrib modules path must match the opencv_contrib version unpacked above (4.5.1)
cmake -D WITH_CUDA=ON -D ENABLE_PRECOMPILED_HEADERS=OFF -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.1/modules -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j6
sudo make install
echo "** Install opencv-4.5.1 successfully"
echo "** Bye :)"
If you are using the Xavier NX, keep -D CUDA_ARCH_BIN="7.2".
Run the script with an install path:
$./opencv4.5_xavier_nx.sh /home/TH-Dev/src/
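Once the script finishes, a quick sanity check that Python picks up the new build and that CUDA support made it in (assuming python3 is the interpreter the build was configured against):
python3 -c "import cv2; print(cv2.__version__)"
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -iE "cuda|cudnn"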

Installing the Rcpp package in Docker leads to a freeze during the installation

I am installing an R Shiny app, but I am not able to run the installation anymore.
This is my Dockerfile
FROM openanalytics/r-base
# system libraries of general use
RUN apt-get update && apt-get install -y \
sudo \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
libssl-dev \
libssh2-1-dev \
libssl1.0.0
# system library dependency for the app
RUN apt-get update && apt-get install -y \
libxml2-dev
RUN R -e "install.packages(c('data.table','janitor','snakecase'), repos='https://cloud.r-project.org/')"
RUN R -e "install.packages('https://cran.r-project.org/src/contrib/Archive/dplyr/dplyr_0.8.2.tar.gz', repos=NULL, type='source')"
RUN R -e "install.packages('https://cran.r-project.org/src/contrib/Archive/shiny/shiny_1.3.0.tar.gz', repos=NULL, type='source')"
# copy the app to the image
RUN mkdir /root/corona
COPY app /root/corona
COPY Rprofile.site /usr/lib/R/etc/
EXPOSE 3838
CMD ["R", "-e shiny::runApp('/root/corona', options = list(port = '3838'))"]
Building the image just freezes, always on this line:
* installing *source* package ‘R6’ ...
** package ‘R6’ successfully unpacked and MD5 sums checked
** using staged installation
** R
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (R6)
* installing *source* package ‘Rcpp’ ...
** package ‘Rcpp’ successfully unpacked and MD5 sums checked
** using staged installation
** libs
g++ -std=gnu++11 -I"/usr/share/R/include" -DNDEBUG -I../inst/include/ -fpic -g -O2 -fdebug-prefix-map=/build/r-base-ttHamR/r-base-4.0.2=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g -c api.cpp -o api.o
g++ -std=gnu++11 -I"/usr/share/R/include" -DNDEBUG -I../inst/include/ -fpic -g -O2 -fdebug-prefix-map=/build/r-base-ttHamR/r-base-4.0.2=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g -c attributes.cpp -o attributes.o
Has anyone had a similar issue and can tell me why this happens?
I tried to install the package from source and tried another version, but it is always the same. Is this a Docker-related or a package-related problem?
I also tried to install it from here: install.packages("Rcpp", repos="https://rcppcore.github.io/drat")
If the compilation really fails, you may have too little RAM. I most often just commit my Dockerfiles and let hub.docker.com build them, but I also frequently test new ones or variations locally and they build just fine. In case you are on an underpowered cloud instance: Rcpp is C++ and does require a bit of RAM from the compiler. So don't try the cheapest 1 core, 512 MB RAM options.
But you also have other options. As this is a system with apt, just install more of the CRAN packages as pre-made binaries: apt-get install r-cran-rcpp r-cran-data.table and so on.
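A sketch of that route inside the Dockerfile's RUN step (the exact r-cran-* package names beyond r-cran-rcpp and r-cran-data.table are guesses; check apt-cache search r-cran- for the ones your app actually needs):
apt-get update && apt-get install -y --no-install-recommends \
    r-cran-rcpp \
    r-cran-data.table \
    r-cran-dplyr \
    r-cran-shiny
# anything not packaged for apt can still be installed from CRAN afterwards
R -e "install.packages('janitor', repos='https://cloud.r-project.org/')"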

How to install RabbitMQ on Docker?

I'm attempting to install RabbitMQ inside a Docker container using an Ubuntu 18.04 image, for running unit tests against it.
To install, I'm running the normal sudo apt-get install rabbitmq-server, and it appears to install fine, but when I attempt to start or communicate with the service, I get the error:
Error: unable to connect to node rabbit@b562da1810ce: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@b562da1810ce]
rabbit@b562da1810ce:
* connected to epmd (port 4369) on b562da1810ce
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Authentication failed (rejected by the remote node), please check the Erlang cookie
current node details:
- node name: 'rabbitmq-cli-69@b562da1810ce'
- home dir: /var/lib/rabbitmq
- cookie hash: YUZIPS6zyhfUBX5afdKGcw==
Researching the "please check the Erlang cookie" text gets me a ton of similar questions, none of which seem to apply to Docker or my situation.
I've tried deleting the ~/.erlang.cookie then restarting the service, and completely purging the package and reinstalling. Nothing's worked.
How do I run RabbitMQ inside Docker?
Edit: This is my install procedure.
root@b562da1810ce:$ sudo apt-get purge -yq rabbitmq-server
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
erlang-asn1 erlang-base erlang-corba erlang-crypto erlang-diameter erlang-edoc erlang-eldap erlang-erl-docgen erlang-eunit erlang-ic erlang-inets erlang-mnesia erlang-nox erlang-odbc erlang-os-mon erlang-parsetools erlang-public-key erlang-runtime-tools erlang-snmp erlang-ssh
erlang-ssl erlang-syntax-tools erlang-tools erlang-xmerl libodbc1
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
rabbitmq-server*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5,678 kB disk space will be freed.
(Reading database ... 69832 files and directories currently installed.)
Removing rabbitmq-server (3.6.10-1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of stop.
(Reading database ... 69618 files and directories currently installed.)
Purging configuration files for rabbitmq-server (3.6.10-1) ...
Processing triggers for systemd (237-3ubuntu10.33) ...
root@b562da1810ce:$ rm -Rf /var/log/rabbitmq/*
root@b562da1810ce:$ sudo apt-get install -yq rabbitmq-server
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
rabbitmq-server
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,625 kB of archives.
After this operation, 5,678 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 rabbitmq-server all 3.6.10-1 [4,625 kB]
Fetched 4,625 kB in 4s (1,070 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package rabbitmq-server.
(Reading database ... 69613 files and directories currently installed.)
Preparing to unpack .../rabbitmq-server_3.6.10-1_all.deb ...
Unpacking rabbitmq-server (3.6.10-1) ...
Setting up rabbitmq-server (3.6.10-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service → /lib/systemd/system/rabbitmq-server.service.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Processing triggers for systemd (237-3ubuntu10.33) ...
root@b562da1810ce:$ sudo service rabbitmq-server status
Status of node rabbit@b562da1810ce
Error: unable to connect to node rabbit@b562da1810ce: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@b562da1810ce]
rabbit@b562da1810ce:
* connected to epmd (port 4369) on b562da1810ce
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Authentication failed (rejected by the remote node), please check the Erlang cookie
current node details:
- node name: 'rabbitmq-cli-30@b562da1810ce'
- home dir: /var/lib/rabbitmq
- cookie hash: DHe9O00f7sIHn/dTThKVVQ==
root@b562da1810ce:$ sudo service rabbitmq-server start
* Starting RabbitMQ Messaging Server rabbitmq-server * FAILED - check /var/log/rabbitmq/startup_\{log, _err\}
[fail]
root@b562da1810ce:$ sudo service rabbitmq-server status
Status of node rabbit@b562da1810ce
Error: unable to connect to node rabbit@b562da1810ce: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@b562da1810ce]
rabbit@b562da1810ce:
* connected to epmd (port 4369) on b562da1810ce
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Authentication failed (rejected by the remote node), please check the Erlang cookie
current node details:
- node name: 'rabbitmq-cli-13@b562da1810ce'
- home dir: /var/lib/rabbitmq
- cookie hash: DHe9O00f7sIHn/dTThKVVQ==
root@b562da1810ce:$ cat /var/log/rabbitmq/startup_err
root@b562da1810ce:$ cat /var/log/rabbitmq/startup_log
ERROR: node with name "rabbit" already running on "b562da1810ce"
Based on the last line from the log, I decided to check ps aux|grep -i rabbit, which shows Rabbit is running. Yet neither service nor rabbitmqctl is able to communicate with it. Why is this?
Either use the official Docker image from https://hub.docker.com/_/rabbitmq, or you can use the Dockerfile from https://hub.docker.com/_/rabbitmq:
# Alpine Linux is not officially supported by the RabbitMQ team -- use at your own risk!
FROM alpine:3.10
RUN apk add --no-cache \
# grab su-exec for easy step-down from root
'su-exec>=0.2' \
# bash for docker-entrypoint.sh
bash \
# "ps" for "rabbitmqctl wait" (https://github.com/docker-library/rabbitmq/issues/162)
procps
# Default to a PGP keyserver that pgp-happy-eyeballs recognizes, but allow for substitutions locally
ARG PGP_KEYSERVER=ha.pool.sks-keyservers.net
# If you are building this image locally and are getting `gpg: keyserver receive failed: No data` errors,
# run the build with a different PGP_KEYSERVER, e.g. docker build --tag rabbitmq:3.7 --build-arg PGP_KEYSERVER=pgpkeys.eu 3.7/ubuntu
# For context, see https://github.com/docker-library/official-images/issues/4252
# Using the latest OpenSSL LTS release, with support until September 2023 - https://www.openssl.org/source/
ENV OPENSSL_VERSION 1.1.1d
ENV OPENSSL_SOURCE_SHA256="1e3a91bc1f9dfce01af26026f856e064eab4c8ee0a8f457b5ae30b40b8b711f2"
# https://www.openssl.org/community/omc.html
ENV OPENSSL_PGP_KEY_IDS="0x8657ABB260F056B1E5190839D9C4D26D0E604491 0x5B2545DAB21995F4088CEFAA36CEE4DEB00CFE33 0xED230BEC4D4F2518B9D7DF41F0DB4D21C1D35231 0xC1F33DD8CE1D4CC613AF14DA9195C48241FBF7DD 0x7953AC1FBC3DC8B3B292393ED5E9E43F7DF9EE8C 0xE5E52560DD91C556DDBDA5D02064C53641C25E5D"
# Use the latest stable Erlang/OTP release (https://github.com/erlang/otp/tags)
ENV OTP_VERSION 22.1.8
# TODO add PGP checking when the feature will be added to Erlang/OTP's build system
# http://erlang.org/pipermail/erlang-questions/2019-January/097067.html
ENV OTP_SOURCE_SHA256="7302be70cee2c33689bf2c2a3e7cfee597415d0fb3e4e71bd3e86bd1eff9cfdc"
# Install dependencies required to build Erlang/OTP from source
# http://erlang.org/doc/installation_guide/INSTALL.html
# autoconf: Required to configure Erlang/OTP before compiling
# dpkg-dev: Required to set up host & build type when compiling Erlang/OTP
# gnupg: Required to verify OpenSSL artefacts
# libncurses5-dev: Required for Erlang/OTP new shell & observer_cli - https://github.com/zhongwencool/observer_cli
RUN set -eux; \
\
apk add --no-cache --virtual .build-deps \
autoconf \
ca-certificates \
dpkg-dev dpkg \
gcc \
gnupg \
libc-dev \
linux-headers \
make \
ncurses-dev \
; \
\
OPENSSL_SOURCE_URL="https://www.openssl.org/source/openssl-$OPENSSL_VERSION.tar.gz"; \
OPENSSL_PATH="/usr/local/src/openssl-$OPENSSL_VERSION"; \
OPENSSL_CONFIG_DIR=/usr/local/etc/ssl; \
\
# /usr/local/src doesn't exist in Alpine by default
mkdir /usr/local/src; \
\
# Required by the crypto & ssl Erlang/OTP applications
wget --output-document "$OPENSSL_PATH.tar.gz.asc" "$OPENSSL_SOURCE_URL.asc"; \
wget --output-document "$OPENSSL_PATH.tar.gz" "$OPENSSL_SOURCE_URL"; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $OPENSSL_PGP_KEY_IDS; do \
gpg --batch --keyserver "$PGP_KEYSERVER" --recv-keys "$key"; \
done; \
gpg --batch --verify "$OPENSSL_PATH.tar.gz.asc" "$OPENSSL_PATH.tar.gz"; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
echo "$OPENSSL_SOURCE_SHA256 *$OPENSSL_PATH.tar.gz" | sha256sum -c -; \
mkdir -p "$OPENSSL_PATH"; \
tar --extract --file "$OPENSSL_PATH.tar.gz" --directory "$OPENSSL_PATH" --strip-components 1; \
\
# Configure OpenSSL for compilation
cd "$OPENSSL_PATH"; \
# OpenSSL's "config" script uses a lot of "uname"-based target detection...
MACHINE="$(dpkg-architecture --query DEB_BUILD_GNU_CPU)" \
RELEASE="4.x.y-z" \
SYSTEM='Linux' \
BUILD='???' \
./config \
--openssldir="$OPENSSL_CONFIG_DIR" \
# add -rpath to avoid conflicts between our OpenSSL's "libssl.so" and the libssl package by making sure /usr/local/lib is searched first (but only for Erlang/OpenSSL to avoid issues with other tools using libssl; https://github.com/docker-library/rabbitmq/issues/364)
-Wl,-rpath=/usr/local/lib \
; \
# Compile, install OpenSSL, verify that the command-line works & development headers are present
make -j "$(getconf _NPROCESSORS_ONLN)"; \
make install_sw install_ssldirs; \
cd ..; \
rm -rf "$OPENSSL_PATH"*; \
# use Alpine's CA certificates
rmdir "$OPENSSL_CONFIG_DIR/certs" "$OPENSSL_CONFIG_DIR/private"; \
ln -sf /etc/ssl/certs /etc/ssl/private "$OPENSSL_CONFIG_DIR"; \
# smoke test
openssl version; \
\
OTP_SOURCE_URL="https://github.com/erlang/otp/archive/OTP-$OTP_VERSION.tar.gz"; \
OTP_PATH="/usr/local/src/otp-$OTP_VERSION"; \
\
# Download, verify & extract OTP_SOURCE
mkdir -p "$OTP_PATH"; \
wget --output-document "$OTP_PATH.tar.gz" "$OTP_SOURCE_URL"; \
echo "$OTP_SOURCE_SHA256 *$OTP_PATH.tar.gz" | sha256sum -c -; \
tar --extract --file "$OTP_PATH.tar.gz" --directory "$OTP_PATH" --strip-components 1; \
\
# Configure Erlang/OTP for compilation, disable unused features & applications
# http://erlang.org/doc/applications.html
# ERL_TOP is required for Erlang/OTP makefiles to find the absolute path for the installation
cd "$OTP_PATH"; \
export ERL_TOP="$OTP_PATH"; \
./otp_build autoconf; \
export CFLAGS='-g -O2'; \
# add -rpath to avoid conflicts between our OpenSSL's "libssl.so" and the libssl package by making sure /usr/local/lib is searched first (but only for Erlang/OpenSSL to avoid issues with other tools using libssl; https://github.com/docker-library/rabbitmq/issues/364)
export CFLAGS="$CFLAGS -Wl,-rpath=/usr/local/lib"; \
hostArch="$(dpkg-architecture --query DEB_HOST_GNU_TYPE)"; \
buildArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; \
dpkgArch="$(dpkg --print-architecture)"; dpkgArch="${dpkgArch##*-}"; \
./configure \
--host="$hostArch" \
--build="$buildArch" \
--disable-dynamic-ssl-lib \
--disable-hipe \
--disable-sctp \
--disable-silent-rules \
--enable-clock-gettime \
--enable-hybrid-heap \
--enable-kernel-poll \
--enable-shared-zlib \
--enable-smp-support \
--enable-threads \
--with-microstate-accounting=extra \
--without-common_test \
--without-debugger \
--without-dialyzer \
--without-diameter \
--without-edoc \
--without-erl_docgen \
--without-erl_interface \
--without-et \
--without-eunit \
--without-ftp \
--without-hipe \
--without-jinterface \
--without-megaco \
--without-observer \
--without-odbc \
--without-reltool \
--without-ssh \
--without-tftp \
--without-wx \
; \
# Compile & install Erlang/OTP
make -j "$(getconf _NPROCESSORS_ONLN)" GEN_OPT_FLGS="-O2 -fno-strict-aliasing"; \
make install; \
cd ..; \
rm -rf \
"$OTP_PATH"* \
/usr/local/lib/erlang/lib/*/examples \
/usr/local/lib/erlang/lib/*/src \
; \
\
runDeps="$( \
scanelf --needed --nobanner --format '%n#p' --recursive /usr/local \
| tr ',' '\n' \
| sort -u \
| awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
)"; \
apk add --no-cache --virtual .otp-run-deps $runDeps; \
apk del --no-network .build-deps; \
\
# Check that OpenSSL still works after purging build dependencies
openssl version; \
# Check that Erlang/OTP crypto & ssl were compiled against OpenSSL correctly
erl -noshell -eval 'io:format("~p~n~n~p~n~n", [crypto:supports(), ssl:versions()]), init:stop().'
ENV RABBITMQ_DATA_DIR=/var/lib/rabbitmq
# Create rabbitmq system user & group, fix permissions & allow root user to connect to the RabbitMQ Erlang VM
RUN set -eux; \
addgroup -g 101 -S rabbitmq; \
adduser -u 100 -S -h "$RABBITMQ_DATA_DIR" -G rabbitmq rabbitmq; \
mkdir -p "$RABBITMQ_DATA_DIR" /etc/rabbitmq /tmp/rabbitmq-ssl /var/log/rabbitmq; \
chown -fR rabbitmq:rabbitmq "$RABBITMQ_DATA_DIR" /etc/rabbitmq /tmp/rabbitmq-ssl /var/log/rabbitmq; \
chmod 777 "$RABBITMQ_DATA_DIR" /etc/rabbitmq /tmp/rabbitmq-ssl /var/log/rabbitmq; \
ln -sf "$RABBITMQ_DATA_DIR/.erlang.cookie" /root/.erlang.cookie
# Use the latest stable RabbitMQ release (https://www.rabbitmq.com/download.html)
ENV RABBITMQ_VERSION 3.7.23-rc.1
# https://www.rabbitmq.com/signatures.html#importing-gpg
ENV RABBITMQ_PGP_KEY_ID="0x0A9AF2115F4687BD29803A206B73A36E6026DFCA"
ENV RABBITMQ_HOME=/opt/rabbitmq
# Add RabbitMQ to PATH, send all logs to TTY
ENV PATH=$RABBITMQ_HOME/sbin:$PATH \
RABBITMQ_LOGS=- RABBITMQ_SASL_LOGS=-
# Install RabbitMQ
RUN set -eux; \
\
apk add --no-cache --virtual .build-deps \
ca-certificates \
gnupg \
xz \
; \
\
RABBITMQ_SOURCE_URL="https://github.com/rabbitmq/rabbitmq-server/releases/download/v$RABBITMQ_VERSION/rabbitmq-server-generic-unix-latest-toolchain-$RABBITMQ_VERSION.tar.xz"; \
RABBITMQ_PATH="/usr/local/src/rabbitmq-$RABBITMQ_VERSION"; \
\
wget --output-document "$RABBITMQ_PATH.tar.xz.asc" "$RABBITMQ_SOURCE_URL.asc"; \
wget --output-document "$RABBITMQ_PATH.tar.xz" "$RABBITMQ_SOURCE_URL"; \
\
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$RABBITMQ_PGP_KEY_ID"; \
gpg --batch --verify "$RABBITMQ_PATH.tar.xz.asc" "$RABBITMQ_PATH.tar.xz"; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
\
mkdir -p "$RABBITMQ_HOME"; \
tar --extract --file "$RABBITMQ_PATH.tar.xz" --directory "$RABBITMQ_HOME" --strip-components 1; \
rm -rf "$RABBITMQ_PATH"*; \
# Do not default SYS_PREFIX to RABBITMQ_HOME, leave it empty
grep -qE '^SYS_PREFIX=\$\{RABBITMQ_HOME\}$' "$RABBITMQ_HOME/sbin/rabbitmq-defaults"; \
sed -i 's/^SYS_PREFIX=.*$/SYS_PREFIX=/' "$RABBITMQ_HOME/sbin/rabbitmq-defaults"; \
grep -qE '^SYS_PREFIX=$' "$RABBITMQ_HOME/sbin/rabbitmq-defaults"; \
chown -R rabbitmq:rabbitmq "$RABBITMQ_HOME"; \
\
apk del .build-deps; \
\
# verify assumption of no stale cookies
[ ! -e "$RABBITMQ_DATA_DIR/.erlang.cookie" ]; \
# Ensure RabbitMQ was installed correctly by running a few commands that do not depend on a running server, as the rabbitmq user
# If they all succeed, it's safe to assume that things have been set up correctly
su-exec rabbitmq rabbitmqctl help; \
su-exec rabbitmq rabbitmqctl list_ciphers; \
su-exec rabbitmq rabbitmq-plugins list; \
# no stale cookies
rm "$RABBITMQ_DATA_DIR/.erlang.cookie"
# Added for backwards compatibility - users can simply COPY custom plugins to /plugins
RUN ln -sf /opt/rabbitmq/plugins /plugins
# set home so that any `--user` knows where to put the erlang cookie
ENV HOME $RABBITMQ_DATA_DIR
# Hint that the data (a.k.a. home dir) dir should be separate volume
VOLUME $RABBITMQ_DATA_DIR
# warning: the VM is running with native name encoding of latin1 which may cause Elixir to malfunction as it expects utf8. Please ensure your locale is set to UTF-8 (which can be verified by running "locale" in your shell)
# Setting all environment variables that control language preferences, behaviour differs - https://www.gnu.org/software/gettext/manual/html_node/The-LANGUAGE-variable.html#The-LANGUAGE-variable
# https://docs.docker.com/samples/library/ubuntu/#locales
ENV LANG=C.UTF-8 LANGUAGE=C.UTF-8 LC_ALL=C.UTF-8
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 4369 5671 5672 25672
CMD ["rabbitmq-server"]
Use this command to build the image (the dot tells Docker to look for the Dockerfile in the current directory):
docker build -t yourImageName .
Once the image is built, you can start a container from it with the following command:
docker run yourImageName
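If all you need is a broker to run unit tests against, the official image route usually comes down to something like this (the container name, published ports and the 3-management tag are just illustrative):
docker run -d --hostname test-rabbit --name test-rabbit \
    -p 5672:5672 -p 15672:15672 \
    rabbitmq:3-management
# once the node is up, sanity-check it from the host
docker exec test-rabbit rabbitmq-diagnostics -q ping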

Docker Build Software using Alpine Linux with Error 'install: unrecognized option: strip-program=strip'

I'm building a mosquitto Docker image; when calling make install I get the error message 'install: unrecognized option: strip-program=strip'. Please help, thanks.
install -d /usr/local/lib/
install -s --strip-program=strip libmosquitto.so.1
/usr/local/lib/libmosquitto.so.1
install: unrecognized option: strip-program=strip
BusyBox v1.27.2 (2017-12-12 10:41:50 GMT) multi-call binary.
Usage: install [-cdDsp] [-o USER] [-g GRP] [-m MODE] [-t DIR] [SOURCE]... DEST
Copy files and set attributes
-c Just copy (default)
-d Create directories
-D Create leading target directories
-s Strip symbol table
-p Preserve date
-o USER Set ownership
-g GRP Set group ownership
-m MODE Set permissions
-t DIR Install to DIR
make[1]: *** [Makefile:28: install] Error 1
make[1]: Leaving directory '/usr/local/src/mosquitto-1.4.15/lib'
make: *** [Makefile:38: install] Error 2
Part of my Dockerfile:
FROM alpine:3.7
RUN apk add --update --no-cache build-base openssl openssl-dev c-ares-dev util-linux-dev libwebsockets-dev libxslt && \
cd /usr/local && \
mkdir src && \
cd src && \
wget https://mosquitto.org/files/source/mosquitto-1.4.15.tar.gz && \
tar -zxvf mosquitto-1.4.15.tar.gz && \
cd mosquitto-1.4.15 && \
make && make install
The last several lines of output when calling make:
cc -Wall -ggdb -O2 -c mosquitto_passwd.c -o mosquitto_passwd.o
cc mosquitto_passwd.o -o mosquitto_passwd -lcrypto
make[1]: Leaving directory '/usr/local/src/mosquitto-1.4.15/src'
set -e; for d in man; do make -C ${d}; done
make[1]: Entering directory '/usr/local/src/mosquitto-1.4.15/man'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/usr/local/src/mosquitto-1.4.15/man'
The problem is that you're installing a mosquitto tar.gz with the BusyBox v1.27.2 version of /usr/bin/install, while the mosquitto tar.gz you downloaded with wget expects the /usr/bin/install from GNU coreutils (8.25, for example), which does have the missing --strip-program option.
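You can confirm which install the build is picking up with a quick check like this (assuming a stock Alpine base image, where install is a BusyBox applet):
which install
apk info --who-owns "$(which install)"   # reports busybox on a stock Alpine image
install --help 2>&1 | head -n 3          # BusyBox applets print their own short usage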
So the solution is simple: install the mosquitto package built for Alpine, not the generic Linux sources:
FROM alpine:3.7
RUN apk add --update --no-cache build-base openssl openssl-dev c-ares-dev util-linux-dev libwebsockets-dev libxslt && \
apk add mosquitto
It'll install version 1.4.15.
EDIT: If you need to install a plugin and compile a generic Linux tar.gz, you have to install coreutils: apk add coreutils
Apart from the answer @mulg0r gave me, I found another way to solve this, which I think is also useful when someone meets a similar problem. Follow this link, https://git.alpinelinux.org/cgit/aports/tree/main/mosquitto?h=master, to the package from Alpine Linux; click the Git repository button, and inside that page are the package's build process instructions, plus some code changes to suit Alpine Linux.
For this question, find the APKBUILD file at https://git.alpinelinux.org/cgit/aports/tree/main/mosquitto?h=master; these lines also solved my problem:
sed -i -e "s|(INSTALL) -s|(INSTALL)|g" \
-e 's|--strip-program=${CROSS_COMPILE}${STRIP}||' \
*/Makefile */*/Makefile
The above just removes the -s flag and the --strip-program option before make install is executed; see the sketch below for how to apply it in the Dockerfile.
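Applied to the Dockerfile above, that amounts to running the same sed on the unpacked sources before make install (a sketch, assuming the sources are still unpacked under /usr/local/src/mosquitto-1.4.15):
cd /usr/local/src/mosquitto-1.4.15 && \
sed -i -e 's|--strip-program=${CROSS_COMPILE}${STRIP}||' \
    -e 's|$(INSTALL) -s|$(INSTALL)|g' \
    */Makefile */*/Makefile && \
make && make install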

Compile FFmpeg with librtmp ERROR: librtmp not found

Environment: Mac OS X 10.9.2, Xcode 5.1.
I have compiled librtmp, libogg and libspeex successfully; they are in directories named fat-librtmp, fat-libogg and fat-libspeex. Then I run the shell script below to compile them into FFmpeg:
#!/bin/sh
# OS X Mavericks, Xcode 5.1
set -ex
VERSION="2.2.2"
CURRPATH=`pwd`
DSTDIR="ffmpeg-built"
SCRATCH="scratch"
LIBRTMP=$CURRPATH/librtmp
ARCHS="i386 x86_64 armv7 armv7s arm64"
CONFIGURE_FLAGS="--enable-shared \
--disable-doc \
--disable-stripping \
--disable-ffmpeg \
--disable-ffplay \
--disable-ffserver \
--disable-ffprobe \
--disable-decoders \
--disable-encoders \
--disable-protocols \
--enable-protocol=file \
--enable-protocol=rtmp \
--enable-librtmp \
--enable-encoder=flv \
--enable-decoder=flv \
--disable-symver \
--disable-asm \
--enable-cross-compile"
rm -rf $DSTDIR
mkdir $DSTDIR
if [ ! `which yasm` ]; then
if [ ! `which brew` ]; then
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
fi
brew install yasm
fi
if [ ! `which gas-preprocessor.pl` ]; then
curl -3L https://github.com/libav/gas-preprocessor/raw/master/gas-preprocessor.pl -o /usr/local/bin/gas-preprocessor.pl
chmod +x /usr/local/bin/gas-preprocessor.pl
fi
if [ ! -e ffmpeg-$VERSION.tar.bz2 ]; then
curl -O http://www.ffmpeg.org/releases/ffmpeg-$VERSION.tar.bz2
fi
tar jxf ffmpeg-$VERSION.tar.bz2
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"
for ARCH in $ARCHS; do
mkdir -p $DSTDIR/$SCRATCH/$ARCH
cd $DSTDIR/$SCRATCH/$ARCH
CFLAGS="-arch $ARCH"
if [ $ARCH == "i386" -o $ARCH == "x86_64" ]; then
PLATFORM="iPhoneSimulator"
CFLAGS="$CFLAGS -mios-simulator-version-min=6.0"
else
PLATFORM="iPhoneOS"
CFLAGS="$CFLAGS -mios-version-min=6.0"
if [ $ARCH == "arm64" ]; then
EXPORT="GASPP_FIX_XCODE5=1"
fi
fi
XCRUN_SDK=`echo $PLATFORM | tr '[:upper:]' '[:lower:]'`
CC="xcrun -sdk $XCRUN_SDK clang"
CFLAGS="$CFLAGS -I$LIBRTMP/include"
CXXFLAGS="$CFLAGS"
LDFLAGS="$CFLAGS -L$LIBRTMP/lib"
$CURRPATH/ffmpeg-$VERSION/configure \
--target-os=darwin \
--arch=$ARCH \
--cc="$CC" \
$CONFIGURE_FLAGS \
--extra-cflags="$CFLAGS" \
--extra-cxxflags="$CXXFLAGS" \
--extra-ldflags="$LDFLAGS" \
--prefix=$CURRPATH/$DSTDIR/$ARCH
make -j3 install $EXPORT
cd $CURRPATH
done
rm -rf $DSTDIR/$SCRATCH
mkdir -p $DSTDIR/lib
cd $DSTDIR/$ARCH/lib
LIBS=`ls *.a`
cd $CURRPATH
for LIB in $LIBS; do
lipo -create `find $DSTDIR -name $LIB` -output $DSTDIR/lib/$LIB
done
cp -rf $DSTDIR/$ARCH/include $DSTDIR
for ARCH in $ARCHS; do
rm -rf $DSTDIR/$ARCH
done
Unfortunately, config.log shows:
check_pkg_config librtmp librtmp/rtmp.h RTMP_Socket
pkg-config --exists --print-errors librtmp
Package librtmp was not found in the pkg-config search path.
Perhaps you should add the directory containing `librtmp.pc'
to the PKG_CONFIG_PATH environment variable
No package 'librtmp' found
ERROR: librtmp not found
I have googled and learned that configure contains the line enabled librtmp && require_pkg_config librtmp librtmp/rtmp.h RTMP_Socket, which may be where it goes wrong. Right? Can somebody help me solve it?
UPDATE at 2014/06/10
I think it's about pkg-config or something, so I have created a file named librtmp.pc at /usr/local/lib/pkgconfig, which contains the text below:
prefix=/usr/local/librtmp
exec_prefix=${prefix}
libdir=${prefix}/lib
includedir=${prefix}/include
Name: librtmp
Description: RTMP implementation
Version: v2.3
Requires:
URL: http://rtmpdump.mplayerhq.hu
Libs: -L${libdir} -lrtmp -lz
Cflags: -I${includedir}
I have also moved the built librtmp to /usr/local. After doing the above, I ran the shell script again, but I still get the same error! Can somebody tell me why, and how to solve it?
The link below gives some insight into building FFmpeg:
https://trac.ffmpeg.org/wiki/CompilationGuide/Generic
Check whether the LD_LIBRARY_PATH environment variable is set properly (and PKG_CONFIG_PATH, since the error shown comes from pkg-config).
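Since configure is only calling pkg-config here, the check can be reproduced by hand before rerunning the whole build script (a sketch, assuming the librtmp.pc created above sits in /usr/local/lib/pkgconfig):
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH"
pkg-config --exists --print-errors librtmp && echo "librtmp found"
pkg-config --cflags --libs librtmp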
