Is it possible to use the latest unstable Dart in a Docker container, and if so, how do I specify it in the Dockerfile?
It's documented here: Using apt-get (Setting up for the dev channel).
With a Dockerfile like
FROM google/debian:wheezy
ENV DART_VERSION 1.14.0-dev.1.0
RUN \
apt-get -q update && \
export DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
apt-show-versions \
ca-certificates \
curl \
git
RUN \
curl https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
curl https://storage.googleapis.com/download.dartlang.org/linux/debian/dart_unstable.list > \
/etc/apt/sources.list.d/dart_unstable.list && \
apt-get update && \
apt-cache policy dart && \
apt-get install dart=$DART_VERSION-1 && \
apt-show-versions dart && \
rm -rf /var/lib/apt/lists/* && \
ln -s /usr/lib/dart /usr/lib/dart/bin/dart-sdk
Dart 1.14.0-dev.1.0 is installed. The line apt-show-versions dart && \ prints the available Dart versions in the build output (for information purposes only; it can be removed).
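To verify the result, the image can be built and the SDK invoked directly (the tag is arbitrary; in this layout the dart binary lives under /usr/lib/dart/bin):
docker build -t dart-dev .
docker run --rm dart-dev /usr/lib/dart/bin/dart --version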
Related
I get strange errors while compiling my project inside the Docker container when I use the -j option of make.
The errors look like this:
/root/projects/obj/linux_debug/src/myos_make/libmyos.a: error adding symbols: Cannot allocate memory
/usr/protobuf-3.9.2/lib/libprotobuf.a: error adding symbols: Bad address
Compiling the same project inside the container without -j, and outside the container with -j, passes OK.
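A quick way to compare the resource limits the build sees inside and outside the container (assuming the difference comes from memory or ulimit settings, which is only a guess here):
# on the host
ulimit -a && free -h
# inside a container started from the builder image (the image name is a placeholder)
docker run --rm -it --entrypoint /bin/bash my-builder -c 'ulimit -a && free -h'
# if a limit differs, it can be raised at run time, e.g.
docker run --rm -it --memory=8g --memory-swap=8g my-builder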
The Dockerfile:
#
# This docker must be built from projects/src folder as context
#
FROM ubuntu:18.04 as base
# Copy the Dockerfile to make it possible to keep track of the content of the image
COPY ./docker_build/docker_files/Dockerfile.ubuntu18.04-builder $HOME/.
# Disable Prompt During Packages Installation
ARG DEBIAN_FRONTEND=noninteractive
# Update Ubuntu Software repository
RUN apt update -y && apt upgrade -y
# Install base utils
RUN apt install -y \
nano \
wget \
sudo \
curl \
&& apt clean -y
# Install additional packages needed for build.
RUN apt install -y \
nasm \
pkg-config \
bc \
python \
python3 \
python3-pip \
sshpass \
libapr1 \
libapr1-dev \
&& apt clean -y
RUN apt install -y \
rpm \
libaio-dev \
libnuma-dev \
numactl \
valgrind \
openssl \
libssl-dev \
ldap-utils \
libldap2-dev \
libncurses5-dev \
libncursesw5-dev \
uuid-dev \
ncurses-base \
expat \
libfuse-dev \
cmake \
build-essential \
autotools-dev \
autoconf \
automake \
doxygen \
linux-headers-4.15.0-173-generic \
g++-multilib \
lib32z1-dev \
libasan4 \
software-properties-common \
libtool \
unzip \
xsltproc \
&& apt clean -y
RUN ln -s /usr/bin/doxygen /bin/doxygen
# install GCC compiler
RUN add-apt-repository ppa:ubuntu-toolchain-r/test -y \
&& apt update -y \
&& apt install gcc-10 -y \
&& apt install g++-10 -y \
&& apt clean -y && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 60 --slave /usr/bin/g++ g++ /usr/bin/g++-10 && \
update-alternatives --config gcc
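# (Not part of the original file: an optional sanity check that the
# update-alternatives call above really selected GCC 10.)
RUN gcc --version && g++ --version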
RUN curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py && \
python get-pip.py
RUN ln -sf /usr/local/bin/pip2 /usr/bin/pip
RUN pip install \
pyyaml \
jinja2 \
gcovr
FROM base as env-prepare
COPY external/protobuf /protobuf
RUN cd protobuf && ./build_proto.sh
FROM base as builder
COPY --from=env-prepare /usr/protobuf-3.9.2/ /usr/protobuf-3.9.2/
COPY --from=env-prepare /usr/protobuf-c-1.3.2/ /usr/protobuf-c-1.3.2/
COPY docker_build/docker-image-release /etc/docker-image-release
ARG UID=1000
USER ${UID}
# This entrypoint makes everything run inside bash, so the devtools script will always run before the command
ENTRYPOINT [ "/bin/bash", "-c" ]
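With this ENTRYPOINT, whatever is passed to docker run is handed to bash -c as a single string, so a build inside the container is invoked along these lines (the image name is a placeholder):
docker run --rm -it my-builder 'make -j$(nproc)'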
I am having an issue installing dependencies with a Dockerfile. The image builds successfully, but when I test some of the dependencies installed in the Dockerfile below, they do not seem to be found. I am new to writing Dockerfiles, so I am not sure if I am missing something; it may have something to do with environment variables. For example, I install sl as a test to confirm that something is not right: when I access a shell in my running image, running 'sl' returns "Command not found". Please let me know what I am doing wrong.
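A quick sanity check that the shell being inspected really comes from the freshly built image (an old container would not have the newly installed packages; the tag name is just a placeholder):
docker build -t my-tomcat-app .
docker run --rm -it my-tomcat-app bash -c 'which sl && sl'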
Here is my Dockerfile that I am using to build an image:
#FROM specifies the parent image from which we will construct our own custom image. This tomcat image contains a limited dist of Ubuntu, Tomcat 9, and JRE 11 ("Temurin" is the Eclipse Adoptium build of OpenJDK)
FROM tomcat:9-jre11-temurin
#add a maintainer label to the image
LABEL maintainer="Kerrick Cavanaugh - kerrickcavanaugh#ufl.edu"
#SciPy, Pandas, Numpy, other deps
RUN apt-get update && \
yes | apt-get -qq -y install \
git \
build-essential \
software-properties-common \
# python-dev \
python3 \
python3-dev \
python-dev \
libssl-dev \
# libffi-dev \
libxml2-dev \
libxslt1-dev \
apt-utils \
zlib1g-dev \
pip \
python3-pip \
sl \
gfortran \
libopenblas-dev \
liblapack-dev \
wget
# add /root/.local/bin to python path (causing warnings)
RUN python3 -c "import sys; sys.path.append('/root/.local/bin');"
RUN pip3 install --upgrade pip && pip3 install -vvv \
wheel \
matplotlib \
ipython \
jupyter \
sympy \
cython \
et-xmlfile==1.0.1 \
imbalanced-learn==0.5.0 \
imblearn==0.0 \
jdcal==1.4.1 \
joblib==0.14.0 \
numpy==1.16.4 \
scipy==1.3.1 \
openpyxl==2.6.4 \
pandas==0.24.2 \
python-dateutil==2.8.0 \
pytest==4.6.3 \
pytest-cov==2.7.1 \
pytz==2019.3 \
scikit-learn==0.21.3 \
six==1.12.0 \
xlrd==1.2.0
#R
RUN apt update && echo 'Y' | apt upgrade && \
#removed sudo from next 4
echo 'Y' | apt install dirmngr gnupg apt-transport-https ca-certificates software-properties-common && \
echo 'Y' | apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9 && \
echo 'Y' | add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu focal-cran40/' && \
echo 'Y' | apt install r-base && \
R --version
#MATLAB???
#FSL
COPY ./aidp_docker/fslinstaller.py /usr/local/tomcat
#printf '36' | <-- for keyboard
#removed sudo
RUN apt-get update && apt-get -qq install python2
RUN { echo; echo '36'; } | python2 /usr/local/tomcat/fslinstaller.py -E -v -q > /dev/null
#AFNI
RUN cd && \
curl -O https://raw.githubusercontent.com/afni/afni/master/src/other_builds/OS_notes.linux_ubuntu_20_64_a_admin.txt && \
curl -O https://raw.githubusercontent.com/afni/afni/master/src/other_builds/OS_notes.linux_ubuntu_20_64_b_user.tcsh && \
curl -O https://raw.githubusercontent.com/afni/afni/master/src/other_builds/OS_notes.linux_ubuntu_20_64_c_nice.tcsh && \
#removed sudo
bash OS_notes.linux_ubuntu_20_64_a_admin.txt 2>&1 | tee o.ubuntu_20_a.txt && \
tcsh OS_notes.linux_ubuntu_20_64_b_user.tcsh 2>&1 | tee o.ubuntu_20_b.txt
#dcm2niix
RUN curl -fLO https://github.com/rordenlab/dcm2niix/releases/latest/download/dcm2niix_lnx.zip
#ANTsR, ANTsRCore, ITKR
RUN git clone https://github.com/stnava/ITKR.git && \
git clone https://github.com/ANTsX/ANTsRCore.git && \
git clone https://github.com/ANTsX/ANTsR.git && \
R -e 'install.packages(c("Rcpp", "RcppEigen", "magrittr"))' && \
R CMD INSTALL ITKR && \
R CMD INSTALL ANTsRCore && \
R CMD INSTALL ANTsR
#COPY the specified folder structure and associated scripts to /usr/local/tomcat
COPY ./aidp_docker/ /usr/local/tomcat
#COPY wAIDP.war to the image
COPY ./wAIDP.war /usr/local/tomcat/webapps
#build complete
RUN echo 'Build complete!'
#runs the Docker image
CMD ["catalina.sh", "run"]
I am trying to integrate an application, QCPump, into an existing Docker image that contains another application, QAtrack+. The goal is to use QCPump inside QAtrack+.
The application code seems to be integrated, but when I launch it, I get an error:
ImportError: libjpeg.so.8: cannot open shared object file: No such file or directory
The error is raised by the wxPython package.
Okay, so I have to install it. Unfortunately, my Docker image runs Debian 11, and Debian seems to have dropped this package several years ago. After some research, I found that for Debian this package is "replaced" by libjpeg-dev, so I installed that instead. Same result...
I found the source of the library (wxPython), which includes a Docker setup for Debian 10: https://github.com/wxWidgets/Phoenix/blob/master/docker/build/debian-10/Dockerfile
I took this part and integrated it into my Dockerfile:
RUN apt-get install -y \
freeglut3 \
freeglut3-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libgstreamer-plugins-base1.0-dev \
libgtk-3-dev \
libjpeg-dev \
libnotify-dev \
libsdl2-dev \
libsm-dev \
libtiff-dev \
libwebkit2gtk-4.0-dev \
libxtst-dev; \
apt-get clean;
But the result is the same...
On some forums, people mentioned that the dynamic linker path has to be updated. I tried it this way, but I am not sure about it:
RUN export LD_LIBRARY_PATH=/usr/local/lib
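Note that a RUN export only lasts for that single build step: each RUN runs in its own shell, so the variable is gone in later layers and at run time. The Dockerfile-level equivalent would be ENV (a sketch, assuming /usr/local/lib is really where the missing library ends up):
ENV LD_LIBRARY_PATH=/usr/local/lib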
And to be honest, I am not sure this is the real problem, or that this is the right solution...
Any idea about this problem?
Below is my complete Dockerfile, if you need it ;)
FROM python:3.6
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' > /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update && apt-get install -y \
cron postgresql-client-10 cifs-utils dos2unix \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get install tzdata
ENV TZ 'Europe/Paris'
RUN dpkg-reconfigure -f noninteractive tzdata
RUN touch /root/.is_inside_docker
RUN pip install virtualenv
RUN date "+%H:%M:%S %d/%m/%y"
RUN apt-get -q update && \
apt-get install -yq chromium && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update -y && apt-get install -y libsdl2-ttf-2.0-0 && \
apt-get update -y && apt-get install -y libjpeg-dev libaio1 libaio-dev && \
wget -q -O /tmp/libpng12.deb http://mirrors.kernel.org/ubuntu/pool/main/libp/libpng/libpng12-0_1.2.54-1ubuntu1_amd64.deb \
&& dpkg -i /tmp/libpng12.deb \
&& rm /tmp/libpng12.deb \
&& apt-get install -y \
freeglut3 \
freeglut3-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libgstreamer-plugins-base1.0-dev \
libgtk-3-dev \
libjpeg-dev \
libnotify-dev \
libsdl2-dev \
libsm-dev \
libtiff-dev \
libwebkit2gtk-4.0-dev \
libxtst-dev; \
apt-get clean;
RUN export LD_LIBRARY_PATH=/usr/local/lib
WORKDIR /usr/src/qatrackplus
I'm trying to run headless chrome inside a docker container with the webgl support and the hardware acceleration.
I have an Nvidia graphics card, and if I test the drivers with the command suggested by Nvidia, it succeeds:
docker run --gpus all nvidia/opengl:base nvidia-smi
This is my Dockerfile:
FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04
# Env vars for the nvidia-container-runtime.
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
git \
ca-certificates \
build-essential \
g++ \
libxinerama-dev \
libxext-dev \
libxrandr-dev \
libxi-dev \
libxcursor-dev \
libxxf86vm-dev \
libvulkan-dev && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y apt-utils && apt-get install -y curl
RUN apt-get update \
&& apt-get install -y wget gnupg ca-certificates \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
# We install Chrome to get all the OS level dependencies, but Chrome itself
# is not actually used as it's packaged in the node puppeteer library.
# Alternatively, we could include the entire dep list ourselves
# (https://github.com/puppeteer/puppeteer/blob/master/docs/troubleshooting.md#chrome-headless-doesnt-launch-on-unix)
# but that seems too easy to get out of date.
&& apt-get install -y google-chrome-stable \
&& rm -rf /var/lib/apt/lists/* \
&& wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh -O /usr/sbin/wait-for-it.sh \
&& chmod +x /usr/sbin/wait-for-it.sh
# Install GTK, pulseaudio and fonts
RUN apt-get update && \
apt-get -y --no-install-recommends install ca-certificates tzdata \
libcanberra-gtk-module libexif12 pulseaudio attr \
fonts-dejavu-core fonts-freefont-ttf fonts-guru-extra \
fonts-kacst fonts-kacst-one fonts-khmeros-core fonts-lao \
fonts-liberation fonts-lklug-sinhala fonts-lohit-guru \
fonts-nanum fonts-opensymbol fonts-sil-abyssinica \
fonts-sil-padauk fonts-symbola fonts-takao-pgothic \
fonts-tibetan-machine fonts-tlwg-garuda-ttf \
fonts-tlwg-kinnari-ttf fonts-tlwg-laksaman-ttf \
fonts-tlwg-loma-ttf fonts-tlwg-mono-ttf \
fonts-tlwg-norasi-ttf fonts-tlwg-purisa-ttf \
fonts-tlwg-sawasdee-ttf fonts-tlwg-typewriter-ttf \
fonts-tlwg-typist-ttf fonts-tlwg-typo-ttf \
fonts-tlwg-umpush-ttf fonts-tlwg-waree-ttf \
ttf-bitstream-vera ttf-dejavu-core ttf-ubuntu-font-family \
fonts-arphic-ukai fonts-arphic-uming \
fonts-ipafont-mincho fonts-ipafont-gothic \
fonts-unfonts-core && \
rm -rf -- /var/lib/apt/lists /tmp/*.deb
However, when I run the container with:
docker run -it --gpus all mytest
and I try to capture a screenshot inside the container with:
google-chrome --no-sandbox --headless --screenshot=ss.png chrome://gpu/
I get the error: Segmentation fault (core dumped)
Any idea?
GPU Chrome headless options are still problematic, especially when you try them in containers. Just update the image to the current nvidia/opengl:1.2-glvnd-runtime-ubuntu20.04 and you will get output without any memory dump. I had the same issues about a year ago with some Chrome options and Vulkan support (now the same thing works fine).
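In other words, only the base image line of the Dockerfile above needs to change:
FROM nvidia/opengl:1.2-glvnd-runtime-ubuntu20.04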
I have an app with Docker, and I am trying to install memcached with php7-fpm.
According to the official Docker documentation, I have this in my Dockerfile:
# PHP Version
FROM php:7.0-fpm
...
# Install Memcached
RUN apt-get install -y libmemcached-dev && \
pecl install memcached && \
docker-php-ext-enable memcached
But I got this error:
pecl/memcached requires PHP (version >= 5.2.0, version <= 6.0.0, excluded versions: 6.0.0), installed version is 7.0.9
I don't want to switch to PHP 5.6. Any ideas?
We build the memcached extension from scratch when building our PHP 7 container. Maybe our approach helps you or points you in the right direction. The documentation on Docker Hub really does seem to be faulty; I tried pecl and it didn't work here either.
So this is how it looks in our Dockerfile:
RUN apt-get update && apt-get install -y \
libmemcached11 \
libmemcachedutil2 \
libmemcached-dev \
libz-dev \
git \
&& cd /root \
&& git clone -b php7 https://github.com/php-memcached-dev/php-memcached \
&& cd php-memcached \
&& phpize \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -rf php-memcached \
&& echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.ini \
&& apt-get remove -y build-essential libmemcached-dev libz-dev \
&& apt-get remove -y libmemcached-dev libz-dev \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
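A quick sanity check afterwards (not part of the original Dockerfile) is to confirm PHP actually loads the extension:
RUN php -m | grep memcached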
It seems that the memcached extension is incompatible with PHP 7 and needs to be installed another way.
After a quick look at the Laradock repo, I solved it in this manner; here is the code:
# PHP Version
FROM php:7.0-fpm
# Install the PHP extensions we need
RUN apt-get update && \
apt-get install -y --no-install-recommends \
curl \
libmemcached-dev \
libz-dev \
libpq-dev \
libjpeg-dev \
libpng12-dev \
libfreetype6-dev \
libicu-dev \
libssl-dev \
libmcrypt-dev && \
docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr && \
docker-php-ext-install gd mysqli opcache intl
.....
# Install Memcached
RUN curl -L -o /tmp/memcached.tar.gz "https://github.com/php-memcached-dev/php-memcached/archive/php7.tar.gz" && \
mkdir -p memcached && \
tar -C memcached -zxvf /tmp/memcached.tar.gz --strip 1 && \
( \
cd memcached && \
phpize && \
./configure && \
make -j$(nproc) && \
make install \
) && \
rm -r memcached && \
rm /tmp/memcached.tar.gz && \
docker-php-ext-enable memcached
One more solution:
FROM php:7.2-fpm
# ...
# INSTALL memcached
RUN apt-get upgrade -y
RUN apt-get install -y memcached
RUN apt-get install -y libmemcached-dev zlib1g-dev libicu-dev
RUN git clone -b php7 https://github.com/php-memcached-dev/php-memcached /usr/src/php/ext/memcached \
&& docker-php-ext-configure /usr/src/php/ext/memcached \
--disable-memcached-sasl \
&& docker-php-ext-install /usr/src/php/ext/memcached \
&& rm -rf /usr/src/php/ext/memcached