Set TARGET for OpenBLAS in Docker image - docker

I'm creating a Docker image with OpenBLAS, here's a minimal working example (MWE):
FROM ubuntu:22.04
# gfortran
RUN apt-get -qq update && apt-get -qq -y install \
build-essential \
gfortran \
curl
# OpenBLAS
RUN curl -L https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz -o v0.3.7.tar.gz \
&& tar -xvf v0.3.7.tar.gz \
&& cd OpenBLAS-0.3.7 \
&& make -j2 USE_THREAD=0 USE_LOCKING=1 DYNAMIC_ARCH=1 NO_AFFINITY=1 FC=gfortran \
&& make install
When I build it I get:
#8 14.58 Makefile:139: *** OpenBLAS: Detecting CPU failed. Please set TARGET explicitly, e.g. make TARGET=your_cpu_target. Please read README for the detail.. Stop.
As far as I understand from this post, the whole point of the flags DYNAMIC_ARCH=1 NO_AFFINITY=1 was to avoid optimizing for the local architecture. Am I missing something?
Thanks,
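The error message itself hints at a workaround: even with DYNAMIC_ARCH=1 (which builds kernels for many CPU types and selects one at runtime), the Makefile still needs a baseline TARGET when it cannot detect the build host's CPU. A hedged sketch, keeping the original flags but adding an explicit baseline (TARGET=NEHALEM is an assumption here; any x86-64 baseline that OpenBLAS supports would do):

```dockerfile
# Sketch: keep DYNAMIC_ARCH=1 for a portable build, but give the
# Makefile an explicit baseline TARGET so it no longer has to probe
# the build host's CPU. NEHALEM is an assumed generic x86-64 baseline.
RUN curl -L https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz -o v0.3.7.tar.gz \
 && tar -xvf v0.3.7.tar.gz \
 && cd OpenBLAS-0.3.7 \
 && make -j2 TARGET=NEHALEM USE_THREAD=0 USE_LOCKING=1 DYNAMIC_ARCH=1 NO_AFFINITY=1 FC=gfortran \
 && make install
```

With DYNAMIC_ARCH=1 still set, the explicit TARGET only fixes the detection failure; the resulting library still carries kernels for multiple architectures.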

Related

How to update dockerfile to update version of poppler-utils' pdftotext?

I currently have a project where the version of pdftotext from poppler-utils is the "testing" version (found here: https://manpages.debian.org/testing/poppler-utils/pdftotext.1.en.html). Instead, I want to use the "experimental" version by updating the Debian image in the Dockerfile (trying to avoid conflicts with other items). Is there a simple way to do this, or is it not feasible?
As usual, I figured out the solution. I got some data from this post that provided good insight into the commands. I had to update to the version that would work with my bot base, but got it all figured out.
Installing Poppler utils of version 0.82 in docker
Leaving this here in case someone else encounters something similar.
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install -y wget build-essential cmake libfreetype6-dev \
pkg-config libfontconfig-dev libjpeg-dev libopenjp2-7-dev
RUN wget https://poppler.freedesktop.org/poppler-data-0.4.9.tar.gz \
&& tar -xf poppler-data-0.4.9.tar.gz \
&& cd poppler-data-0.4.9 \
&& make install \
&& cd .. \
&& wget https://poppler.freedesktop.org/poppler-20.08.0.tar.xz \
&& tar -xf poppler-20.08.0.tar.xz \
&& cd poppler-20.08.0 \
&& mkdir build \
&& cd build \
&& cmake .. \
&& make \
&& make install \
&& ldconfig
CMD tail -f /dev/null
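To confirm the image really ends up with the freshly built poppler on PATH, one could add an optional sanity check right after the install step (a sketch; pdftotext prints its version banner to stderr):

```dockerfile
# Optional sanity check: fail the build early if the pdftotext on PATH
# is not the freshly built 20.08.0 (version banner goes to stderr).
RUN pdftotext -v 2>&1 | grep -q '20.08.0'
```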

Error trying to install Python inside a Docker container

I am relatively new to docker. I have an application which I want to containerize.
Below is my Dockerfile:
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make altinstall \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install Google Cloud SDK
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
I am trying to install Python 3.6.5, but I am receiving the following error:
2020-01-09 17:26:13 (107 KB/s) - 'Python-3.6.5.tar.xz' saved [17049912/17049912]
tar (child): xz: Cannot exec: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
The command '/bin/sh -c wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz && tar -xvf Python-${PYTHON_VERSION}.tar.xz && cd Python-${PYTHON_VERSION} && ./configure && make altinstall && cd / && rm -rf Python-${PYTHON_VERSION}' returned a non-zero code: 2
Decompressing an .xz file requires the xz binary, which under Ubuntu is provided by the package xz-utils. So you have to install xz-utils in your image before decompressing an .xz file.
You can add this to your previous apt-get install run:
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils
This should fix the failing tar call in the next RUN instruction.
Instead of trying to install Python, just start with a base image that has Python preinstalled, e.g. python:3.6-buster. This image is based on Debian Buster, which was released in 2019. Since Ubuntu is based on Debian, everything will be pretty similar, and since it's from 2019 (as opposed to Ubuntu 16.04, which is from 2016) you'll get more up-to-date software.
See https://pythonspeed.com/articles/base-image-python-docker-images/ for longer discussion.
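Following that suggestion, the whole Python-from-source step collapses into a base-image swap; a minimal sketch (the gcloud steps are carried over from the question, paths unchanged):

```dockerfile
# Python 3.6 comes preinstalled; no source build or xz-utils needed.
FROM python:3.6-buster

# Install Google Cloud SDK, as in the original Dockerfile
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz \
 && mkdir -p /usr/local/gcloud \
 && tar -C /usr/local/gcloud -xf /tmp/google-cloud-sdk.tar.gz \
 && /usr/local/gcloud/google-cloud-sdk/install.sh

# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
```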

Docker OpenGL support without GPU, gl error: linking with uncompiled/unspecialized shader

In order to build up a headless simulation cluster, we're working on containerization of our existing tools. Right now, the accessible server does not have any NVIDIA GPUs.
One problem we encounter is that a specific application uses OpenGL for rendering. With a physical GPU, the simulation tool runs without any problem. To work around the GPU dependency, we're using Mesa 3D OpenGL software rendering (Gallium) with the LLVMpipe and OpenSWR drivers. For reference, we had a look at https://github.com/jamesbrink/docker-opengl.
The current Dockerfile, which builds mesa 19.0.2 (using gcc-8) from source, looks like this:
# OPENGL SUPPORT ------------------------------------------------------------------------------
# start with plain ubuntu as base image for testing
FROM ubuntu AS builder
# install some needed packages and set gcc-8 as default compiler
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
llvm-7 \
llvm-dev \
autoconf \
automake \
bison \
flex \
gettext \
libtool \
python-dev \
git \
pkgconf \
python-mako \
zlib1g-dev \
x11proto-gl-dev \
libxext-dev \
xcb \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxcb-xfixes0-dev \
libdrm-dev \
g++ \
make \
xvfb \
x11vnc \
g++-8 && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 800 --slave /usr/bin/g++ g++ /usr/bin/g++-8
# get mesa (using 19.0.2 as later versions don't use the configure script)
WORKDIR /mesa
RUN git clone https://gitlab.freedesktop.org/mesa/mesa.git
WORKDIR /mesa/mesa
RUN git checkout mesa-19.0.2
#RUN git checkout mesa-18.2.2
# build and install mesa
RUN libtoolize && \
autoreconf --install && \
./configure \
--enable-glx=gallium-xlib \
--with-gallium-drivers=swrast,swr \
--disable-dri \
--disable-gbm \
--disable-egl \
--enable-gallium-osmesa \
--enable-autotools \
--enable-llvm \
--with-llvm-prefix=/usr/lib/llvm-7/ \
--prefix=/usr/local && \
make -j 4 && \
make install && \
rm -rf /mesa
# SIM -----------------------------------------------------------------------------------------
FROM ubuntu
COPY --from=builder /usr/local /usr/local
# copy all simulation binaries to the image
COPY .....
# update ubuntu and install all sim dependencies
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
xterm \
freeglut3 \
openssh-server \
synaptic \
nfs-common \
mesa-utils \
xfonts-75dpi \
libusb-0.1-4 \
python \
libglu1-mesa \
libqtgui4 \
gedit \
xvfb \
x11vnc \
llvm-7-dev \
expat \
nano && \
dpkg -i /vtdDeb/libpng12-0_1.2.54-1ubuntu1.1_amd64.deb
# set the environment variables (display -> 99 and LIBGL_ALWAYS_SOFTWARE)
ENV DISPLAY=":99" \
GALLIUM_DRIVER="llvmpipe" \
LIBGL_ALWAYS_SOFTWARE="1" \
LP_DEBUG="" \
LP_NO_RAST="false" \
LP_NUM_THREADS="" \
LP_PERF="" \
MESA_VERSION="19.0.2" \
XVFB_WHD="1920x1080x24"
If we now start the container and initialize the Xvfb session, all GLX examples like glxgears work. Also, the output of glxinfo | grep '^direct rendering:' is yes, so OpenGL is working.
However, if we start our simulation binary (which is provided by some company and cannot be changed now), the following error messages appear:
uniform block ub_lights has no binding.
uniform block ub_lights has no binding.
FRAGMENT glCompileShader "../data/Shaders/roadRendererFrag.glsl" FAILED
FRAGMENT Shader "../data/Shaders/roadRendererFrag.glsl" infolog:
0:277(48): error: unsized array index must be constant
0:344(48): error: unsized array index must be constant
glLinkProgram "RoadRenderingBase_Program" FAILED
Program "RoadRenderingBase_Program" infolog:
error: linking with uncompiled/unspecialized shader
Any idea how to fix this? To us, the error message is rather opaque.
Has anyone encountered a similar problem?

Install dependencies of PHP extensions

I've started learning Docker and now I'm building my own container with PHP7 and Apache.
I have to enable some PHP extensions, but I would like to know how you determine which packages (dependencies) need to be installed before installing an extension.
This is my Dockerfile at the moment:
FROM php:7.0-apache
RUN apt-get update && apt-get install -y libpng-dev
RUN docker-php-ext-install gd
In this case, to enable the gd extension, I googled the error returned during the build step and found that it requires the package libpng-dev, but it's annoying to repeat these steps for every single extension I want to install.
How do you manage this kind of problem?
The process is indeed annoying and very much something that could be done by a computer. Luckily, someone wrote a script to do exactly that: docker-php-extension-installer.
Your example can then be written as:
FROM php:7.0-apache
#get the script
ADD https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions /usr/local/bin/
#install the script
RUN chmod uga+x /usr/local/bin/install-php-extensions && sync
#run the script
RUN install-php-extensions gd
Here is what I do: install PHP, some PHP extensions, and tools that I usually need.
# Add the "PHP 7" ppa
RUN add-apt-repository -y \
ppa:ondrej/php
# Install PHP-CLI 7, some PHP extensions and some useful tools with apt
RUN apt-get update && apt-get install -y --force-yes \
php7.0-cli \
php7.0-common \
php7.0-curl \
php7.0-json \
php7.0-xml \
php7.0-mbstring \
php7.0-mcrypt \
php7.0-mysql \
php7.0-pgsql \
php7.0-sqlite \
php7.0-sqlite3 \
php7.0-zip \
php7.0-memcached \
php7.0-gd \
php7.0-fpm \
php7.0-xdebug \
php7.1-bcmath \
php7.1-intl \
php7.0-dev \
libcurl4-openssl-dev \
libedit-dev \
libssl-dev \
libxml2-dev \
xz-utils \
sqlite3 \
libsqlite3-dev \
git \
curl \
vim \
nano \
net-tools \
pkg-config \
iputils-ping
# remove load xdebug extension (only load on phpunit command)
RUN sed -i 's/^/;/g' /etc/php/7.0/cli/conf.d/20-xdebug.ini
Creating your own Dockerfiles involves trial and error - or building on and tweaking the work of others.
If you haven't already found this, take a look: https://hub.docker.com/r/chialab/php/
This image appears to have extensions added on top of the official base image. If you don't need all of the extensions in this image, you could look at the source of this image and tweak it to your liking.

Docker container not able to locate Zip packages?

All Ubuntu Wily repositories (main, universe, etc.) are added in my Dockerfile and are present in my Docker image. However, apt-get install in the following Dockerfile is not able to locate any zip/unzip packages. Error log at the end.
How can I install these common zip packages? At least p7zip-full and rar.
Dockerfile
FROM ubuntu:15.10
CMD ["bash"]
RUN add-apt-repository main && \
add-apt-repository universe && \
add-apt-repository restricted && \
add-apt-repository multiverse
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get dist-upgrade -y && \
apt-get -y autoremove && \
apt-get clean
RUN apt-get install p7zip \
p7zip-full \
p7zip-rar \
unace \
unrar \
zip \
unzip \
xz-utils \
sharutils \
rar \
uudeview \
mpack \
arj \
cabextract \
file-roller \
&& rm -rf /var/lib/apt/lists/*
ERROR THROWN
E: Unable to locate package p7zip-full
E: Unable to locate package unace
E: Unable to locate package unrar
E: Unable to locate package zip
E: Unable to locate package unzip
E: Unable to locate package sharutils
E: Unable to locate package rar
E: Unable to locate package uudeview
E: Unable to locate package mpack
E: Unable to locate package arj
E: Unable to locate package cabextract
E: Unable to locate package file-roller
I tried with this Dockerfile (yours, minus what I pointed out in my previous comment):
FROM ubuntu:15.10
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get dist-upgrade -y && \
apt-get -y autoremove && \
apt-get clean
RUN apt-get install -y p7zip \
p7zip-full \
unace \
zip \
unzip \
xz-utils \
sharutils \
uudeview \
mpack \
arj \
cabextract \
file-roller \
&& rm -rf /var/lib/apt/lists/*
CMD ["bash"]
It works and installs zip and p7zip:
$ docker build -t mytest .
$ docker run -d -ti --name mytest mytest /bin/bash
$ docker exec -ti mytest /bin/bash
root@f01fc3456a2a:/# zip
root@f01fc3456a2a:/# p7zip
According to Docker best practices, @gile's answer could be improved by:
- using apt-get update and install in a single layer
- avoiding apt-get upgrade
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
Docker sees the initial and modified instructions as identical and
reuses the cache from previous steps. As a result the apt-get update
is not executed because the build uses the cached version. Because the
apt-get update is not run, your build can potentially get an outdated
version of the curl and nginx packages.
Using RUN apt-get update && apt-get install -y ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as “cache busting”. You can also
achieve cache-busting by specifying a package version. This is known
as version pinning
Avoid RUN apt-get upgrade and dist-upgrade, as many of the “essential”
packages from the parent images cannot upgrade inside an unprivileged
container. If a package contained in the parent image is out-of-date,
contact its maintainers. If you know there is a particular package,
foo, that needs to be updated, use apt-get install -y foo to update
automatically.
This should be the same as @gile's answer with those best practices applied:
FROM ubuntu:15.10
RUN apt-get -y update \
&& apt-get -y autoremove \
&& apt-get clean \
&& apt-get install -y p7zip \
p7zip-full \
unace \
zip \
unzip \
xz-utils \
sharutils \
uudeview \
mpack \
arj \
cabextract \
file-roller \
&& rm -rf /var/lib/apt/lists/*
CMD ["bash"]
Edit:
The Docker best-practices documentation has been rearranged. The advice remains the same: while the section the above link anchors to now only alludes to the interaction between the build cache and apt-get, a new section of the documentation is dedicated to this topic.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#apt-get
in short:
Always combine RUN apt-get update with apt-get install in the same RUN statement
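The "version pinning" variant of cache busting quoted above can be sketched like this (the version patterns are illustrative placeholders, not real pins; actual candidate versions come from apt-cache policy):

```dockerfile
# Cache busting via version pinning: changing a pinned version string
# changes the RUN instruction, which invalidates the cached layer and
# forces apt-get update to run again on the next build.
RUN apt-get update && apt-get install -y \
    zip=3.0-* \
    unzip=6.0-* \
 && rm -rf /var/lib/apt/lists/*
```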
