All Ubuntu Wily repositories (main, universe, etc.) are added in my Dockerfile and are present in my Docker image. However, apt-get install in the following Dockerfile is not able to locate any zip/unzip packages. The error log is at the end.
How can I install these common zip packages? At least p7zip-full and rar.
Dockerfile
FROM ubuntu:15.10
CMD ["bash"]
RUN add-apt-repository main && \
add-apt-repository universe && \
add-apt-repository restricted && \
add-apt-repository multiverse
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get dist-upgrade -y && \
apt-get -y autoremove && \
apt-get clean
RUN apt-get install p7zip \
p7zip-full \
p7zip-rar \
unace \
unrar \
zip \
unzip \
xz-utils \
sharutils \
rar \
uudeview \
mpack \
arj \
cabextract \
file-roller \
&& rm -rf /var/lib/apt/lists/*
ERROR THROWN
E: Unable to locate package p7zip-full
E: Unable to locate package unace
E: Unable to locate package unrar
E: Unable to locate package zip
E: Unable to locate package unzip
E: Unable to locate package sharutils
E: Unable to locate package rar
E: Unable to locate package uudeview
E: Unable to locate package mpack
E: Unable to locate package arj
E: Unable to locate package cabextract
E: Unable to locate package file-roller
I tried with this Dockerfile (your Dockerfile without what I told you in my previous comment):
FROM ubuntu:15.10
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get dist-upgrade -y && \
apt-get -y autoremove && \
apt-get clean
RUN apt-get install -y p7zip \
p7zip-full \
unace \
zip \
unzip \
xz-utils \
sharutils \
uudeview \
mpack \
arj \
cabextract \
file-roller \
&& rm -rf /var/lib/apt/lists/*
CMD ["bash"]
It works and it installs zip and p7zip
$ docker build -t mytest .
$ docker run -d -ti --name mytest mytest /bin/bash
$ docker exec -ti mytest /bin/bash
root@f01fc3456a2a:/# zip
root@f01fc3456a2a:/# p7zip
According to Docker best practices, @gile's answer could be improved by:
using apt-get update and install in a single layer
avoiding apt-get upgrade
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
Docker sees the initial and modified instructions as identical and
reuses the cache from previous steps. As a result the apt-get update
is not executed because the build uses the cached version. Because the
apt-get update is not run, your build can potentially get an outdated
version of the curl and nginx packages.
Using RUN apt-get update && apt-get install -y ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as “cache busting”. You can also
achieve cache-busting by specifying a package version. This is known
as version pinning.
Avoid RUN apt-get upgrade and dist-upgrade, as many of the “essential”
packages from the parent images cannot upgrade inside an unprivileged
container. If a package contained in the parent image is out-of-date,
contact its maintainers. If you know there is a particular package,
foo, that needs to be updated, use apt-get install -y foo to update
automatically.
This should be the same as @gile's answer with those best practices applied:
FROM ubuntu:15.10
RUN apt-get -y update \
&& apt-get -y autoremove \
&& apt-get clean \
&& apt-get install -y p7zip \
p7zip-full \
unace \
zip \
unzip \
xz-utils \
sharutils \
uudeview \
mpack \
arj \
cabextract \
file-roller \
&& rm -rf /var/lib/apt/lists/*
CMD ["bash"]
*edit
The Docker best practices documentation has been re-arranged.
The advice remains the same. While the part of the documentation that the above link anchors to now only alludes to the interaction between the build cache and apt-get...
they have added a new section of documentation dedicated to this topic:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#apt-get
in short:
Always combine RUN apt-get update with apt-get install in the same RUN statement
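For illustration, here is a minimal sketch of cache busting with version pinning applied to the image from the question; the zip=3.0-11 version string is only a hypothetical example (check apt-cache madison zip for the real candidates):
FROM ubuntu:15.10
# the pinned zip version below is hypothetical; changing it also busts the build cache
RUN apt-get update && apt-get install -y \
    zip=3.0-11 \
    unzip \
    p7zip-full \
    && rm -rf /var/lib/apt/lists/*
CMD ["bash"]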
Related
I'm trying to install chrome in a docker container. I execute:
RUN apt-get install -y wget
RUN wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb # problem here
RUN apt -f install -y
The problem is that dpkg -i fails because of missing dependencies. In principle this is not a big problem, as the next command should fix it, and indeed it does when run interactively from within the container. But when building a Docker container, this error makes the build process stop:
dpkg: error processing package google-chrome-stable (--install):
dependency problems - leaving unconfigured
Errors were encountered while processing:
google-chrome-stable
root@78b45ab9aa33:/#
exit
How can I overcome this problem? Isn't there a simpler way to install Chrome without provoking the dependency problem? I can't find the repository to add so I can run a regular apt-get install google-chrome, which is what I'd like to do. In the Google Linux repository they just mention that "the packages will automatically configure the repository settings necessary". Which is not exactly what I get...
After the comment by @Facty and some more searching, I found two solutions to install Google Chrome without raising this error. I'll post them below for future reference or for people having the same issue.
There are actually two ways to install Chrome on a docker container:
If you download the .deb file manually, you can install it with apt-get instead of dpkg. This will automatically install the dependencies without having to call apt -f install -y later:
RUN apt-get install -y wget
RUN wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get install ./google-chrome-stable_current_amd64.deb
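Following the cache-busting advice quoted earlier, the download and install could also be collapsed into a single RUN layer; this is only a sketch and has not been tested against a specific Chrome release:
# apt-get (unlike dpkg) resolves dependencies of a local .deb in one go
RUN apt-get update \
    && apt-get install -y wget \
    && wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
    && apt-get install -y ./google-chrome-stable_current_amd64.deb \
    && rm google-chrome-stable_current_amd64.deb \
    && rm -rf /var/lib/apt/lists/*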
The other solution is to add the repositories (installing the gpg key) and install from them directly, skipping the manual download:
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
RUN apt-get update && apt-get -y install google-chrome-stable
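Either way, an optional version check at the end of the Dockerfile confirms the install actually worked (same pattern as the Node example below):
RUN google-chrome --version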
Here is an example for Node versions (Debian-based):
Dockerfile
FROM node:16.16.0 as base
# Chrome dependency installation
RUN apt-get update && apt-get install -y \
fonts-liberation \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libatspi2.0-0 \
libcups2 \
libdbus-1-3 \
libdrm2 \
libgbm1 \
libgtk-3-0 \
# libgtk-4-1 \
libnspr4 \
libnss3 \
libwayland-client0 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxkbcommon0 \
libxrandr2 \
xdg-utils \
libu2f-udev \
libvulkan1
# Chrome installation
RUN curl -LO https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get install -y ./google-chrome-stable_current_amd64.deb
RUN rm google-chrome-stable_current_amd64.deb
# Check chrome version
RUN echo "Chrome: " && google-chrome --version
If you're using it in Python to run Selenium, here is what solved my problem:
RUN apt -f install -y
RUN apt-get install -y wget
RUN wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get install ./google-chrome-stable_current_amd64.deb -y
Sometimes using wget alone doesn't solve the problem because of the missing dependencies, so you can use apt -f install -y.
The only mistake @Pythonist had was the order of the commands.
I'm building a Docker image using a Dockerfile. I have put ARG DEBIAN_FRONTEND=noninteractive at the beginning of the Dockerfile to avoid debconf warnings while building.
The warnings do not show up when using apt-get install inside the Dockerfile. However, when executing an sh script (install_dependencies.sh) from the Dockerfile that contains apt-get install commands, the warnings show up again. I also tried setting DEBIAN_FRONTEND=noninteractive inside the sh script itself.
I can solve this by adding echo 'debconf debconf/frontend select Noninteractive' | sudo debconf-set-selections in the sh script before the apt-get install commands, but I would like to avoid that, since any failure in the script would leave the debconf frontend set to Noninteractive.
Dockerfile:
FROM ubuntu:18.04
# Avoid warnings by switching to noninteractive
ARG DEBIAN_FRONTEND=noninteractive
WORKDIR /tmp
# Configure APT --> HERE THE WARNINGS 'debconf: unable to initialize frontend: Dialog' ARE NOT DISPLAYED
RUN apt-get update \
&& apt-get -y upgrade \
&& apt-get install -y \
apt-utils \
dialog \
fakeroot \
software-properties-common \
2>&1
# Install APT packages --> HERE THE WARNINGS 'debconf: unable to initialize frontend: Dialog' ARE NOT DISPLAYED
RUN apt-get update && apt-get install -y \
#
# System packages
iproute2 \
procps \
lsb-release \
sudo \
unattended-upgrades \
dnsutils \
iputils-ping \
xauth \
openssl \
tar \
zip \
#
# Helpers
&& apt-get install -y \
ca-certificates \
curl \
wget \
lsof \
gconf2 \
gconf-service \
#
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
# Install LTE stack dependencies --> HERE THE WARNINGS 'debconf: unable to initialize frontend: Dialog' ARE DISPLAYED
RUN chmod +x install_dependencies.sh \
&& export DEBIAN_FRONTEND=noninteractive; ./install_dependencies.sh
install_dependencies.sh:
#!/bin/sh
export DEBIAN_FRONTEND=noninteractive
APT_PACKAGES="lib32z1 \
python-setuptools \
libmysqlclient-dev \
ninja-build"
install_apt_packages() {
sudo apt-get install -y tzdata \
build-essential \
git
for package in $APT_PACKAGES;
do
sudo apt-get -y install "$package";
done
}
main() {
sudo apt-get update && sudo apt-get upgrade -y
install_apt_packages
}
main
EDIT: Thanks to @arkadiusz-drabczyk for telling me to remove sudo from the apt-get commands. What he says makes perfect sense: sudo drops the environment variables before executing the command.
Drop sudo in your script; there is no point in using it if you're running as root. This is also the reason that DEBIAN_FRONTEND has no effect: sudo drops your current user's environment for security reasons, so you'd have to use it with the -E option to make it work.
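A minimal sketch of both options, assuming the script runs as root inside the container:
#!/bin/sh
# Option 1: no sudo, so the exported variable reaches apt-get directly
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get install -y tzdata

# Option 2: if sudo really is required, preserve the environment with -E
# sudo -E apt-get install -y tzdata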
I tried to integrate an application, QCPump, inside an existing Docker image alongside another application, QAtrack+. The goal is to use QCPump inside QAtrack+.
The application code seems to be integrated, but when I launch it, I get an error:
ImportError: libjpeg.so.8: cannot open shared object file: No such file or directory
The error is raised by the wxPython package.
Okay, so I have to install it. Unfortunately, my Docker Linux is Debian 11, and Debian seems to have dropped this package several years ago. So, after some research, I found that this package is "replaced", for Debian, by libjpeg-dev. So I installed it. Same result...
I found the code of the library (wxPython), and a Docker build exists for Debian 10: https://github.com/wxWidgets/Phoenix/blob/master/docker/build/debian-10/Dockerfile
I took this part and integrated it into my Dockerfile:
RUN apt-get install -y \
freeglut3 \
freeglut3-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libgstreamer-plugins-base1.0-dev \
libgtk-3-dev \
libjpeg-dev \
libnotify-dev \
libsdl2-dev \
libsm-dev \
libtiff-dev \
libwebkit2gtk-4.0-dev \
libxtst-dev; \
apt-get clean;
But same result...
In some forums, people mentioned that LD_LIBRARY_PATH has to be updated. I tried it this way, but I am not really sure:
RUN export LD_LIBRARY_PATH=/usr/local/lib
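From what I understand, a RUN export only lasts for that single build step, so the persistent form would be an ENV instruction; just a sketch, and I am not sure it addresses the libjpeg error:
# ENV persists into later layers and into the running container,
# unlike an export inside a single RUN step
ENV LD_LIBRARY_PATH=/usr/local/lib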
And to be honest, I am not sure this is the problem, nor that this is the solution here...
Any idea about this problem?
Below is my complete Dockerfile, if you need it ;)
FROM python:3.6
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' > /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update && apt-get install -y \
cron postgresql-client-10 cifs-utils dos2unix \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get install tzdata
ENV TZ 'Europe/Paris'
RUN dpkg-reconfigure -f noninteractive tzdata
RUN touch /root/.is_inside_docker
RUN pip install virtualenv
RUN date "+%H:%M:%S %d/%m/%y"
RUN apt-get -q update && \
apt-get install -yq chromium && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update -y && apt-get install -y libsdl2-ttf-2.0-0 && \
apt-get update -y && apt-get install -y libjpeg-dev libaio1 libaio-dev && \
wget -q -O /tmp/libpng12.deb http://mirrors.kernel.org/ubuntu/pool/main/libp/libpng/libpng12-0_1.2.54-1ubuntu1_amd64.deb \
&& dpkg -i /tmp/libpng12.deb \
&& rm /tmp/libpng12.deb \
&& apt-get install -y \
freeglut3 \
freeglut3-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libgstreamer-plugins-base1.0-dev \
libgtk-3-dev \
libjpeg-dev \
libnotify-dev \
libsdl2-dev \
libsm-dev \
libtiff-dev \
libwebkit2gtk-4.0-dev \
libxtst-dev; \
apt-get clean;
RUN export LD_LIBRARY_PATH=/usr/local/lib
WORKDIR /usr/src/qatrackplus
Could someone help me? I'm starting from the following Dockerfile:
FROM python:3.6-slim
RUN apt-get update
RUN apt-get install -y apt-utils build-essential gcc
And I would like to add OpenJDK 8.
Thanks.
You can download the Java tar.gz, unpack it and set the environment variables.
Below is a sample implementation in a Dockerfile:
FROM python:3.6-slim
RUN apt-get update
RUN apt-get install -y apt-utils build-essential gcc
ENV JAVA_FOLDER java-se-8u41-ri
ENV JVM_ROOT /usr/lib/jvm
ENV JAVA_PKG_NAME openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
ENV JAVA_TAR_GZ_URL https://download.java.net/openjdk/jdk8u41/ri/$JAVA_PKG_NAME
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/* && \
apt-get clean && \
apt-get autoremove && \
echo Downloading $JAVA_TAR_GZ_URL && \
wget -q $JAVA_TAR_GZ_URL && \
tar -xvf $JAVA_PKG_NAME && \
rm $JAVA_PKG_NAME && \
mkdir -p /usr/lib/jvm && \
mv ./$JAVA_FOLDER $JVM_ROOT && \
update-alternatives --install /usr/bin/java java $JVM_ROOT/$JAVA_FOLDER/bin/java 1 && \
update-alternatives --install /usr/bin/javac javac $JVM_ROOT/$JAVA_FOLDER/bin/javac 1 && \
java -version
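If the image builds, a quick sanity check could look like this (the python-java8 tag is arbitrary):
docker build -t python-java8 .
docker run --rm python-java8 java -version
docker run --rm python-java8 python --version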
In cases where you need Python and Java installed in the same image (e.g. pyspark), I find it easier to extend the openjdk images with Python than the other way around. For example:
FROM openjdk:8-jdk-slim-buster
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
curl \
python3.7 \
python3-pip \
python3.7-dev \
python3-setuptools \
python3-wheel
Build the image: docker build --rm -t so:64051125 .
Note: the version of python3 available through apt on debian:buster-slim is 3.7; if you really need 3.6, you could try building it from source.
As of today (17/02/2021), openjdk-8-jdk is still in Debian 9 (Stretch) but has been removed from Debian 10 (Buster), so you should find openjdk-8-jdk if you use the python:3.6-stretch Docker image instead of python:3.6-slim-buster.
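A minimal sketch of that approach (note that Stretch repositories have since been moved to the Debian archive, so the apt sources may need adjusting today):
FROM python:3.6-stretch
# the mkdir works around an openjdk postinst quirk on images without man page directories
RUN mkdir -p /usr/share/man/man1 && \
    apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jdk && \
    rm -rf /var/lib/apt/lists/*
RUN java -version && python --version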
We developed a Windows-based application and are trying to convert it to run as a Docker container in a Linux-based environment.
Unfortunately, one of the 3rd-party libraries can't be converted to the Linux environment.
We built a Docker image (Ubuntu 16.04 + Wine 4.0 + winetricks) that our application runs on, but all 3 components (Ubuntu, Wine + winetricks) weigh more than 3 GB.
Below is the part of the Dockerfile we use to build the Docker image.
Our application is 64-bit and combines Python and C++ code.
How can we reduce the Docker image size?
Is there another way to run a Windows-based application as a Docker container in a Linux environment?
FROM ubuntu:16.04
# recommended to add 32-bit arch for wine
RUN dpkg --add-architecture i386 \
# install things to help install wine
&& apt-get update \
&& apt-get install -y --allow-unauthenticated wget software-properties-common software-properties-common debconf-utils python-software-properties apt-transport-https cabextract telnet xvfb unzip build-essential \
# register repo and install winehq
&& wget -nc https://dl.winehq.org/wine-builds/Release.key \
&& apt-key add Release.key \
&& wget -nc https://dl.winehq.org/wine-builds/winehq.key \
&& apt-key add winehq.key \
&& apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/ \
&& apt-get update \
&& apt-get install -y xvfb \
&& apt-get install --install-recommends -y --allow-unauthenticated winehq-stable
# setup vars for wine
ENV DISPLAY=":0.0"
ENV WINEARCH="win64"
ENV WINEPREFIX="/root/.wine64"
ENV WINESYSTEM32="/root/.wine64/drive_c/windows/system32"
ENV WINEDLLOVERRIDES="mscoree,mshtml="
ENV WINEDEBUG=-all
COPY scripts /root/scripts
# pull down winetricks, and install requirements
# vcrun2015 and vcrun2010 are Visual Studio C++ Redistributables
RUN set -e \
&& mkdir -p $WINEPREFIX \
&& cd $WINEPREFIX \
&& wget https://raw.githubusercontent.com/Winetricks/winetricks/20190615/src/winetricks \
&& chmod +x winetricks \
&& xvfb-run wine wineboot --init \
&& xvfb-run wineserver -w \
&& xvfb-run sh ./winetricks -q d3dx9 corefonts vcrun2015
RUN set -x \
&& pythonVersions='python3.7' \
&& apt-get update \
&& apt-get install -y --allow-unauthenticated --no-install-recommends software-properties-common \
&& apt-add-repository -y ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y --allow-unauthenticated --no-install-recommends $pythonVersions \
&& rm -rf /var/lib/apt/lists/* \
...
You install many unneeded packages. You should carefully read
https://www.dajobe.org/blog/2015/04/18/making-debian-docker-images-smaller/
which explains why.
You should remove xvfb (I guess you need xvfb during the installation and configuration of wine, but not after) and all the "recommended" packages
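A rough sketch of those two ideas; whether winehq-stable still behaves correctly without its recommended packages would need testing:
# install only what is explicitly listed, not the recommended extras
RUN apt-get update \
    && apt-get install -y --no-install-recommends --allow-unauthenticated winehq-stable \
    && rm -rf /var/lib/apt/lists/*

# once the winetricks step has run, xvfb is no longer needed
RUN apt-get purge -y xvfb \
    && apt-get autoremove -y \
    && apt-get clean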
Look also at
https://github.com/wagoodman/dive
which is an excellent tool if you need to see how efficient your Docker image is.
Also use
https://github.com/jwilder/docker-squash
which can save some space.
Good hunting!