Dockerfile for multistage image won't work - docker

I am attempting to run a Dockerfile for a multistage image I cloned from github. The Dockerfile reads:
FROM openjdk:9-jdk-slim AS build
COPY certificates /usr/local/share/ca-certificates/certificates
RUN apt-get update && apt-get install --no-install-recommends -y -qq ca-certificates-java && \
update-ca-certificates --verbose
FROM openjdk:9-jre-slim
COPY --from=build /etc/ssl/certs/java/cacerts /etc/ssl/certs/java/cacerts
RUN groupadd --gid 1000 java && \
useradd --uid 1000 --gid java --shell /bin/bash --create-home java && \
chmod -R a+w /home/java
WORKDIR /home/java
USER java
When I attempt to run it with the command:
docker image build . -t layers:5
I get the following response:
executor failed running [/bin/sh -c apt-get update && apt-get install --no-install-recommends -y -qq ca-certificates-java && update-ca-certificates --verbose]: exit code: 100
I have tried solving this by removing '-y', attaching 'apt-' to 'update-ca-certificates', and removing the dash between 'ca' and 'certificates', but none of these changes have worked. I'm unsure how to tackle this; your help would be most appreciated.

The base image, openjdk:9-jdk-slim, is an old image based on a pre-release of Debian (buster/sid).
The apt-get update step is what fails, because the repositories' public keys are not available:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 648ACFD622F3D138 NO_PUBKEY 0E98404D386FA1D9
Normally, you'd import the keys and be on your way. However, using this image is not recommended: it reports its Debian version as "Debian GNU/Linux buster/sid", and the Debian release docs say: "The unstable distribution is always called sid." You'd be better off moving to a stable Debian base, i.e. a more recently built image for a newer version of Java.
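For example, a minimal sketch of the first stage on a current, supported base image (eclipse-temurin:17-jdk-jammy is just one assumption; any recent JDK image on a stable Debian or Ubuntu release should behave similarly):
# Assumption: a newer JDK base on a stable release instead of openjdk:9-jdk-slim
FROM eclipse-temurin:17-jdk-jammy AS build
COPY certificates /usr/local/share/ca-certificates/certificates
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates-java && \
    update-ca-certificates --verbose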
Another option, which could cause more problems, is to copy /etc/apt/trusted.gpg.d from a newer Buster release such as buster-20221205-slim and then run your commands.
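If you do need to stay on openjdk:9-jdk-slim, that workaround might look roughly like this (debian:buster-20221205-slim as the source of the keys is taken from the suggestion above; treat it as an untested sketch):
FROM openjdk:9-jdk-slim AS build
# Assumption: borrow newer Debian archive keys so apt-get update can verify the repositories
COPY --from=debian:buster-20221205-slim /etc/apt/trusted.gpg.d /etc/apt/trusted.gpg.d
COPY certificates /usr/local/share/ca-certificates/certificates
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates-java && \
    update-ca-certificates --verbose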

Related

laravel-with-docker-example project fails to build

I am trying to run the project below with Docker:
https://github.com/kyleferguson/laravel-with-docker-example
It has the following Dockerfile:
FROM php:7-fpm
RUN apt-get update && apt-get install -y libmcrypt-dev mysql-client \
&& docker-php-ext-install mcrypt pdo_mysql
WORKDIR /var/www
When I run "docker-compose up" after running "composer install", I get the errors below.
executor failed running [/bin/sh -c apt-get update && apt-get install
-y libmcrypt-dev mariadb-client && docker-php-ext-install mcrypt pdo_mysql]: exit code: 1 ERROR: Service 'app' failed to build
Any idea on how to fix this?
Note:
I already tried replacing mysql-client with mariadb-client and default-mysql-client; the issue is still the same.
You have a couple of problems here, which is why switching to mariadb-client didn't work on its own.
One way to see the problem more clearly is to start a bash shell in a container created from your base image and run the commands manually:
docker run -it php:7-fpm bash
From there if you run each install individually you'll see where you are failing:
# apt-get install -y mysql-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package mysql-client
Either add a repository that provides mysql-client, or use mariadb-client instead.
# docker-php-ext-install mcrypt
error: /usr/src/php/ext/mcrypt does not exist
The mcrypt extension was removed from PHP core in 7.2, so you'll need to install it via pecl if you really need it.
Unless you're running a very old version of Laravel, you shouldn't need mcrypt.
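Putting that together, a sketch of a Dockerfile that should build against a current php:7-fpm image; the pecl mcrypt lines are only needed if you actually use mcrypt, and the 1.0.5 version pin is an assumption:
FROM php:7-fpm
# mariadb-client replaces the mysql-client package that no longer exists in Debian
RUN apt-get update && apt-get install -y libmcrypt-dev mariadb-client \
    && pecl install mcrypt-1.0.5 \
    && docker-php-ext-enable mcrypt \
    && docker-php-ext-install pdo_mysql
WORKDIR /var/www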

Install Java runtime in Debian based docker image

I am trying to install the java runtime in a Debian based docker image (mcr.microsoft.com/dotnet/core/sdk:3.1-buster). According to various howtos this should be possible by running
RUN apt update
RUN apt-get install openjdk-11-jre
The latter command comes back with
E: Unable to locate package openjdk-11-jre
However according to https://packages.debian.org/buster/openjdk-11-jre the package does exist. What am I doing wrong?
I'm not sure which image you are pulling from. I used the slim image, with this Dockerfile:
FROM debian:buster-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN mkdir -p /usr/share/man/man1 /usr/share/man/man2
RUN apt-get update && \
apt-get install -y --no-install-recommends \
openjdk-11-jre
# Prints installed java version, just for checking
RUN java --version
NOTE: if you don't run mkdir -p /usr/share/man/man1 /usr/share/man/man2 you'll run into dependency problems with ca-certificates, openjdk-11-jre-headless, etc. I've been using this workaround provided by the community and haven't really looked into a permanent fix.
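The same workaround should carry over to the image from the question; a sketch, untested against that exact tag:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster
ENV DEBIAN_FRONTEND=noninteractive
# Work around the missing man page directories that break the JRE's package scripts
RUN mkdir -p /usr/share/man/man1 /usr/share/man/man2
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-11-jre && \
    rm -rf /var/lib/apt/lists/*
# Prints the installed java version, just for checking
RUN java --version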

install python3.6 on amazonlinux docker image

I have been experimenting to create a docker image with python3.6 based on amazonlinux.
So far, I have not been very successful. I use
docker run -it amazonlinux
to start an interactive docker terminal. Inside the terminal, I run "yum install python36" and see the following error message. Note that I copied this step from an old amazonlinux-based Dockerfile that used to work, so I suspect the error below is due to Amazon having updated their Docker Linux image.
bash-4.2# yum install python36
Loaded plugins: ovl, priorities
amzn2-core | 2.4 kB 00:00:00
No package python36 available.
Error: Nothing to do
I have tried to add a python3.6 repo by following this post:
https://janikarhunen.fi/how-to-install-python-3-6-1-on-centos-7
However, it still gives the same error when I run
yum install python36u
Is there any way to add python3.6 to the amazonlinux base image? Thanks in advance.
There is now a far easier answer to this question thanks to AWS 'extras'. Now this will work:
amazon-linux-extras install python3
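In a Dockerfile that might look like this (a minimal sketch; amazon-linux-extras ships with the amazonlinux:2 image, and the -y flag just keeps the install non-interactive):
FROM amazonlinux:2
# Enable the python3 topic from Amazon Linux Extras and clean up the yum cache
RUN amazon-linux-extras install -y python3 && \
    yum clean all
# Prints the installed python version, just for checking
RUN python3 --version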
You can check this Dockerfile, which is based on Amazon Linux and builds Python version PYTHON_VERSION=3.6.4.
Or you can work from your existing one, along these lines:
ARG PYTHON_VERSION=3.6.4
ARG BOTO3_VERSION=1.6.3
ARG BOTOCORE_VERSION=1.9.3
ARG APPUSER=app
RUN yum -y update &&\
yum install -y shadow-utils findutils gcc sqlite-devel zlib-devel \
bzip2-devel openssl-devel readline-devel libffi-devel && \
groupadd ${APPUSER} && useradd ${APPUSER} -g ${APPUSER} && \
cd /usr/local/src && \
curl -O https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz && \
tar -xzf Python-${PYTHON_VERSION}.tgz && \
cd Python-${PYTHON_VERSION} && \
./configure --enable-optimizations && make && make altinstall && \
rm -rf /usr/local/src/Python-${PYTHON_VERSION}* && \
yum remove -y shadow-utils audit-libs libcap-ng && yum -y autoremove && \
yum clean all
But it is better to clone the repo and build your own image from that.
I too had a similar issue, with docker:
yum install docker
Loaded plugins: ovl, priorities
amzn2-core | 3.7 kB 00:00:00
No package docker available.
Error: Nothing to do
Instead of yum, I used amazon-linux-extras, and it worked:
amazon-linux-extras install docker

copy or add command not executed on docker hub

This Dockerfile works as expected on my laptop, but it fails when I use automated builds on Docker Hub.
FROM ubuntu
# Install required software via apt and pip
RUN apt-get -y update && \
apt-get install -y \
awscli \
python \
python-pip \
software-properties-common \
&& add-apt-repository ppa:ubuntugis/ppa \
&& apt-get -y update \
&& apt-get install -y \
gdal-bin \
&& pip install boto3
# Copy Build Thumbnail script to Docker image and add execute permissions
COPY build-thumbnails.py build-thumbnails.py
RUN chmod +x build-thumbnails.py
The error is:
Step 6/7 : COPY build-thumbnails.py build-thumbnails.py
COPY failed: stat /var/lib/docker/tmp/docker-builder259560514/build-thumbnails.py: no such file or directory
The repo is here...
https://github.com/shantanuo/docker/blob/master/batch/Dockerfile
Why would copy or add command not work for automated builds?
It seems other people have had the same issue; see here:
https://forums.docker.com/t/docker-build-failing-on-docker-hub/76191/2
The solution is to set the build context appropriately so that the relative path in the Dockerfile COPY is correct.
In your Docker Hub repository go to "Builds" and click on "Configure Automated Builds". There you can set the "Build Context" for each build rule.
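For comparison, building locally with the context pointed at the subdirectory that holds the Dockerfile and the script would look something like this (the batch/ path is taken from the linked repo; adjust it if your layout differs):
docker build -t build-thumbnails ./batch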
Check the last answer on this page too:
https://github.com/docker/hub-feedback/issues/811
Let me know if that helps!

docker-compose update from S3 bucket

Our Dockerfile invokes a python script which copies a binary from S3 to /usr/bin. This works fine the first time, but from then on "docker-compose build" does nothing because everything is cached. That is a problem when the binary has changed.
Short of building with --no-cache, what is the best way to make sure "docker-compose build" always picks up a new binary if there is one? We don't mind if it unnecessarily downloads the binary even when it is unchanged, as long as it does pick it up when the binary has changed.
Seems like we want a Dockerfile step that always executes?
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install software-properties-common
RUN apt-get -y install --reinstall ca-certificates
RUN add-apt-repository ppa:fkrull/deadsnakes
RUN apt-get update && apt-get install -y \
curl \
wget \
vim \
git \
python3.5 \
python3-pip \
python3-setuptools \
libpcap0.8-dev
RUN ln -sf /usr/bin/python3.5 /usr/bin/python3
ADD . /app
WORKDIR /app
# Install Python Requirements
RUN pip3 install -r etc/python/requirements.txt
# Download/Install processor and associated libs
RUN python3 setup_processor.py
RUN mkdir -p /logs
ENTRYPOINT ["/app/entrypoint.sh"]
Where setup_processor.py downloads directly from S3 to /usr/bin.
As of now there is no direct feature for this, but there is a workaround.
Add a build argument just before your download step:
ARG BUILD_ON=now
# Download/Install processor and associated libs
RUN python3 setup_processor.py
While building the image, use:
docker build --build-arg BUILD_ON="$(date)" ....
This always makes sure the value of the ARG changes, so the cache for every step after it is invalidated.
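Since you are on docker-compose, you can feed the same argument through the compose file; a sketch, assuming your service is called app:
# docker-compose.yml
version: "3.8"
services:
  app:
    build:
      context: .
      args:
        BUILD_ON: "${BUILD_ON:-now}"
Then something like BUILD_ON=$(date +%s) docker-compose build should invalidate the cache from the ARG step onward.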
A feature for this has already been requested and is being discussed in the thread below:
https://github.com/moby/moby/issues/1996

Resources