Docker Chrome Selenium Java openjdk:8 to Alpine

I have a Dockerfile that already works on openjdk:8, but I am trying to convert it to Alpine and it is giving me some trouble. The application is written in Java and uses Selenium. This is my current code:
FROM openjdk:8-jdk-alpine
RUN apk update \
&& apk fetch gnupg \
&& apk add --virtual \
curl wget xvfb unzip gnupg \
&& gpg --list-keys
ARG CHROME_DRIVER_VERSION=85.0.4183.87
RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
&& apk update \
&& apk add google-chrome-stable \
&& apk cache clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
&& wget https://chromedriver.storage.googleapis.com/${CHROME_DRIVER_VERSION}/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
EXPOSE 42052
.
.
.
I tried to add gnupg as described here:
Docker: Using apt-key with alpine image
But it does not work; I just get an error: /bin/sh: gpg: not found
If I remove it, I get the error that apt-key is not found. What is the alternative on Alpine, or what changes do I have to make to my Dockerfile for it to work again?
Thanks in advance

Apparently the Chrome .deb package won't work on Alpine, so it needs Chromium instead. If you are already using ChromeDriver in the Java code, it will work without any changes, as in my case.
FROM openjdk:8-jdk-alpine
RUN apk update && apk add --no-cache bash \
alsa-lib \
at-spi2-atk \
atk \
cairo \
cups-libs \
dbus-libs \
eudev-libs \
expat \
flac \
gdk-pixbuf \
glib \
libgcc \
libjpeg-turbo \
libpng \
libwebp \
libx11 \
libxcomposite \
libxdamage \
libxext \
libxfixes \
tzdata \
libexif \
udev \
xvfb \
zlib-dev \
chromium \
chromium-chromedriver \
&& rm -rf /var/cache/apk/* \
/usr/share/man \
/tmp/*
RUN mkdir -p /data && adduser -D chrome \
&& chown -R chrome:chrome /data
USER chrome
.
.
.
If you are going to create folders and/or add files, as in my case, just add USER root first (see the sketch below).
It will work the same as the openjdk:8 version.
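For example, a minimal sketch of that pattern (the /data/reports directory and app.jar are only illustrations, not part of the original Dockerfile):
# switch back to root for filesystem changes
USER root
RUN mkdir -p /data/reports \
&& chown -R chrome:chrome /data/reports
# app.jar is a placeholder for your application artifact
COPY --chown=chrome:chrome app.jar /data/app.jar
# drop privileges again for runtime
USER chrome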

Actually, for the Alpine version in the answer above to work correctly, you also have to set the Chromium binary in your code:
chromeOptions.setBinary("/usr/bin/chromium-browser");
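For reference, here is a minimal sketch of how that can look in the Selenium setup; the chromedriver path and the extra flags are common container assumptions, not something stated in the original answer:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ChromiumDriverFactory {
    public static WebDriver create() {
        // chromium-chromedriver on Alpine typically installs the driver here (assumption)
        System.setProperty("webdriver.chrome.driver", "/usr/bin/chromedriver");
        ChromeOptions chromeOptions = new ChromeOptions();
        // point Selenium at Chromium instead of Google Chrome
        chromeOptions.setBinary("/usr/bin/chromium-browser");
        // flags that are often needed when running headless as a non-root container user
        chromeOptions.addArguments("--headless", "--no-sandbox", "--disable-dev-shm-usage");
        return new ChromeDriver(chromeOptions);
    }
}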

Related

Sudo not available in docker image despite being installed in base image

I can't access sudo from my container, even though it looks like the base image has it installed.
nistmni@ca5af2f4aace:~$ sudo echo x
bash: sudo: command not found
nistmni@ca5af2f4aace:~$ /bin/sudo
bash: /bin/sudo: No such file or directory
My Dockerfile is simple:
FROM nistmni/minc-toolkit
RUN mkdir ~/execute
COPY . ~/execute/
CMD /bin/bash
The Dockerfile for nistmni/minc-toolkit is:
FROM ubuntu:xenial
# install basic system packages
RUN apt-get -y update && \
apt-get -y dist-upgrade && \
apt-get install -y --no-install-recommends \
sudo \
build-essential g++ gfortran bc \
bison flex \
libx11-dev x11proto-core-dev \
libxi6 libxi-dev \
libxmu6 libxmu-dev libxmu-headers \
libgl1-mesa-dev libglu1-mesa-dev \
libjpeg-dev \
libssl-dev ccache libapt-inst2.0 git lsb-release \
curl ca-certificates unzip && \
apt-get autoclean && \
rm -rf /var/lib/apt/lists/*
# add user to build all tools
RUN useradd -ms /bin/bash nistmni && \
echo "nistmni ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/nistmni && \
chmod 0440 /etc/sudoers.d/nistmni
ENV PATH=/usr/lib/ccache:$PATH
# add new cmake
RUN mkdir src && \
cd src && \
curl -L --output cmake-3.14.5.tar.gz https://github.com/Kitware/CMake/releases/download/v3.14.5/cmake-3.14.5.tar.gz && \
tar zxf cmake-3.14.5.tar.gz && \
cd cmake-3.14.5 && \
./configure --prefix=/usr --no-qt-gui && \
make && \
make install && \
cd ../../ && \
rm -rf src
USER nistmni
ENV HOME /home/nistmni
WORKDIR /home/nistmni
Is this:
- some subtlety of base images I don't understand?
- sudo getting removed somewhere I'm not seeing?
- not actually the Dockerfile being used to create the image?
Thanks.

Cannot get npm installed app to be present in final Docker image

I have added installation of the Vega tools to the docker-asciidoctor Dockerfile, and they are present when running the bats tests, but when I run the image they are no longer present.
I am quite new to Docker.
I have also tried some variations of adding the node_modules directory to the PATH, but nothing works.
In all cases, the directory the Vega tools are installed into is simply not in the image.
I am adding the Vega tools like this:
...
&& npm install --build-from-source -g vega-cli vega vega-lite vega-embed \
&& echo `which vl2vg` \
...
which provides this output:
...
/usr/bin/vl2png -> /usr/lib/node_modules/vega-lite/bin/vl2png
/usr/bin/vl2svg -> /usr/lib/node_modules/vega-lite/bin/vl2svg
/usr/bin/vl2vg -> /usr/lib/node_modules/vega-lite/bin/vl2vg
/usr/bin/vg2pdf -> /usr/lib/node_modules/vega-cli/bin/vg2pdf
/usr/bin/vg2png -> /usr/lib/node_modules/vega-cli/bin/vg2png
/usr/bin/vg2svg -> /usr/lib/node_modules/vega-cli/bin/vg2svg
...
/usr/bin/vl2vg
just as one would expect.
And the test that one of the tools is there looks like this:
@test "vl2vg is installed and in the path" {
  docker run -t --rm "${DOCKER_IMAGE_NAME_TO_TEST}" which vl2vg
}
That passes.
I would expect the Vega tools to be available in the image when I do the following:
docker run -it --entrypoint /bin/sh asciidoctor/docker-asciidoctor
/documents # which vl2vg
which: no vl2vg in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
Given the output from the build of the image I would have expected to see /usr/bin/vl2vg, and /usr/lib/node_modules is not there either.
For completeness, here are the Dockerfile and the steps taken:
FROM alpine:3.9
LABEL MAINTAINERS="Guillaume Scheibel <guillaume.scheibel@gmail.com>, Damien DUPORTAL <damien.duportal@gmail.com>"
ARG asciidoctor_version=2.0.9
ARG asciidoctor_confluence_version=0.0.2
ARG asciidoctor_pdf_version=1.5.0.alpha.17
ARG asciidoctor_diagram_version=1.5.16
ARG asciidoctor_epub3_version=1.5.0.alpha.9
ARG asciidoctor_mathematical_version=0.3.0
ARG asciidoctor_revealjs_version=2.0.0
ENV ASCIIDOCTOR_VERSION=${asciidoctor_version} \
ASCIIDOCTOR_CONFLUENCE_VERSION=${asciidoctor_confluence_version} \
ASCIIDOCTOR_PDF_VERSION=${asciidoctor_pdf_version} \
ASCIIDOCTOR_DIAGRAM_VERSION=${asciidoctor_diagram_version} \
ASCIIDOCTOR_EPUB3_VERSION=${asciidoctor_epub3_version} \
ASCIIDOCTOR_MATHEMATICAL_VERSION=${asciidoctor_mathematical_version} \
ASCIIDOCTOR_REVEALJS_VERSION=${asciidoctor_revealjs_version}
# Installing packages required for the runtime of
# any of the asciidoctor-* functionalities
RUN apk add --no-cache \
bash \
curl \
ca-certificates \
findutils \
font-bakoma-ttf \
graphviz \
inotify-tools \
make \
openjdk8-jre \
py2-pillow \
py-setuptools \
python2 \
ruby \
ruby-mathematical \
ttf-liberation \
unzip \
which
RUN addgroup --system appgroup && adduser -S appuser -G appgroup
WORKDIR /data/
# Installing Ruby Gems needed in the image
# including asciidoctor itself
RUN apk add --no-cache --virtual .rubymakedepends \
build-base \
libxml2-dev \
ruby-dev \
&& gem install --no-document \
"asciidoctor:${ASCIIDOCTOR_VERSION}" \
"asciidoctor-confluence:${ASCIIDOCTOR_CONFLUENCE_VERSION}" \
"asciidoctor-diagram:${ASCIIDOCTOR_DIAGRAM_VERSION}" \
"asciidoctor-epub3:${ASCIIDOCTOR_EPUB3_VERSION}" \
"asciidoctor-mathematical:${ASCIIDOCTOR_MATHEMATICAL_VERSION}" \
asciimath \
"asciidoctor-pdf:${ASCIIDOCTOR_PDF_VERSION}" \
"asciidoctor-revealjs:${ASCIIDOCTOR_REVEALJS_VERSION}" \
coderay \
epubcheck:3.0.1 \
haml \
kindlegen:3.0.3 \
pygments.rb \
rake \
rouge \
slim \
thread_safe \
tilt \
&& apk add --update npm \
# && npm -g config set user root \
&& apk --no-cache --virtual .canvas-build-deps add \
build-base \
cairo-dev \
jpeg-dev \
pango-dev \
giflib-dev \
pixman-dev \
pangomm-dev \
libjpeg-turbo-dev \
freetype-dev \
&& apk --no-cache add \
pixman \
cairo \
pango \
giflib \
# && npm -g config set user root \
&& npm config set user 0 \
&& npm config set unsafe-perm true \
&& npm install --build-from-source -g vega-cli vega vega-lite vega-embed \
&& echo `which vl2vg` \
# && apk del .canvas-build-deps \
&& apk del -r --no-cache .rubymakedepends
ENV PATH /data/node_modules/.bin:$PATH
ENV NODE_PATH /data/node_modules/
# Installing Python dependencies for additional
# functionalities such as diagrams or syntax highlighting
RUN apk add --no-cache --virtual .pythonmakedepends \
build-base \
python2-dev \
py2-pip \
&& pip install --upgrade pip \
&& pip install --no-cache-dir \
actdiag \
'blockdiag[pdf]' \
nwdiag \
Pygments \
seqdiag \
&& apk del -r --no-cache .pythonmakedepends
USER appuser
WORKDIR /documents
VOLUME /documents
CMD ["/bin/bash"]
The build command is (lifted from the Makefile):
DOCKER_IMAGE_NAME ?= docker-asciidoctor
DOCKERHUB_USERNAME ?= asciidoctor
DOCKER_IMAGE_TEST_TAG ?= $(shell git rev-parse --short HEAD)
build:
docker build \
-t $(DOCKER_IMAGE_NAME_TO_TEST) \
-f Dockerfile \
$(CURDIR)/
Testing is with:
test:
bats $(CURDIR)/tests/*.bats
where one of the tests is the one mentioned above.
The root cause for this was that I wasn't running the image I had just built, because I wasn't tagging it.
docker tag asciidoctor/docker-asciidoctor:442d4d0 asciidoctor/docker-asciidoctor:latest
and then things worked.
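As a minimal sketch (reusing the image name and the which vl2vg check from above), tagging at build time avoids the separate docker tag step:
# build and tag in one go
docker build -t asciidoctor/docker-asciidoctor:latest -f Dockerfile .
# verify the tool is present in the image that was just built
docker run --rm asciidoctor/docker-asciidoctor:latest which vl2vg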
Thanks for this! Super helpful, especially the tip about npm install -g canvas and vega-cli using npm -g config set user root.
I was building from a different base image: mhart/alpine-node. Here's the Dockerfile that worked for me:
FROM mhart/alpine-node:12.16.2
WORKDIR /app
COPY *.jar .
RUN apk add --no-cache \
python2 \
build-base \
g++ \
cairo-dev \
jpeg-dev \
pango-dev \
bash \
imagemagick
RUN npm -g config set user root \
&& npm install -g canvas \
&& npm install -g vega vega-lite vega-cli
# USER root
# RUN apk update
# RUN apk fetch openjdk8
# RUN apk add openjdk8
The four lines commented out at the end are for installing Java as well.

Docker image Size increases if I remove few lines of code

I'm trying to reduce the Docker image size, but the Dockerfile is behaving strangely.
I concatenated the RUN commands to reduce the size of the image. When I build the Dockerfile below, it creates an image of only 235MB.
FROM nginx:alpine
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl \
pcre-dev \
zlib-dev \
linux-headers \
curl \
gnupg \
libxslt-dev \
gd-dev \
perl-dev \
&& apk add --no-cache --virtual .libmodsecurity-deps \
pcre-dev \
libxml2-dev \
git \
libtool \
automake \
autoconf \
g++ \
flex \
bison \
yajl-dev \
git \
# Add runtime dependencies that should not be removed
&& apk add --no-cache \
doxygen \
geoip \
geoip-dev \
yajl \
libstdc++ \
sed \
# Installing ModSec Library version 3
&& echo "Installing ModSec Library" \
&& git clone -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity /opt/ModSecurity \
&& cd /opt/ModSecurity \
&& git submodule init \
&& git submodule update \
&& ./build.sh \
&& ./configure && make && make install \
&& echo "Finished Installing ModSec Library" \
# Installing ModSec - Nginx connector
&& cd /opt \
&& echo 'Installing ModSec - Nginx connector' \
&& git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git \
&& wget http://nginx.org/download/nginx-$NGINX_VERSION.tar.gz \
&& tar zxvf nginx-$NGINX_VERSION.tar.gz \
# Adding Nginx Connector Module
&& cd /opt/nginx-$NGINX_VERSION \
&& ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx \
&& make modules \
&& cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules \
&& echo "Finished Installing ModSec - Nginx connector" \
# Begin installing ModSec OWASP Rules
&& echo "Begin installing ModSec OWASP Rules" \
&& mkdir /etc/nginx/modsec \
&& wget -P /etc/nginx/modsec/ https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/modsecurity.conf-recommended \
&& mv /etc/nginx/modsec/modsecurity.conf-recommended /etc/nginx/modsec/modsecurity.conf \
&& sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsec/modsecurity.conf \
# Fetching owasp-modsecurity-crs
&& cd /opt \
&& git clone -b v3.0/master https://github.com/SpiderLabs/owasp-modsecurity-crs \
&& mv owasp-modsecurity-crs/ /usr/local/ \
&& cp /usr/local/owasp-modsecurity-crs/crs-setup.conf.example /usr/local/owasp-modsecurity-crs/crs-setup.conf \
# Creating modsec file
&& echo 'Creating modsec file' \
&& echo -e '# From https://github.com/SpiderLabs/ModSecurity/blob/master/\n \
# modsecurity.conf-recommended\n \
# Edit to set SecRuleEngine On\n \
Include "/etc/nginx/modsec/modsecurity.conf"\n \
# OWASP CRS v3 rules\n \
Include "/usr/local/owasp-modsecurity-crs/crs-setup.conf"\n \
Include "/usr/local/owasp-modsecurity-crs/rules/*.conf"'\
>>/etc/nginx/modsec/main.conf \
&& chown nginx:nginx /etc/nginx/modsec/main.conf \
# Removing old Nginx conf files
&& rm -fr /etc/nginx/conf.d/ \
&& rm -fr /etc/nginx/nginx.conf \
&& chown -R nginx:nginx /usr/share/nginx \
# delete uneeded and clean up
&& apk del .build-deps \
&& apk del .libmodsecurity-deps \
&& rm -fr ModSecurity \
&& rm -fr ModSecurity-nginx \
&& rm -fr nginx-$NGINX_VERSION.tar.gz \
&& rm -fr nginx-$NGINX_VERSION
COPY conf/nginx.conf /etc/nginx
COPY conf/conf.d /etc/nginx/conf.d
COPY errors /usr/share/nginx/errors
WORKDIR /usr/share/nginx/html
CMD nginx -g 'daemon off;'
EXPOSE 80
I have looked at docker history <imageId>; it shows that this RUN command adds around 855MB. Does anybody understand why it is behaving this way?
Any thoughts would be much appreciated; it is hard to debug by rebuilding the image every time.
I tried building in both ways and found not much difference.
Most of the disk space is consumed by /opt/ModSecurity
Initially it was 74MB after git clone.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
oldimage latest 924a8d4f941e 11 minutes ago 867MB
newimage latest d1ca029927c2 About an hour ago 867MB
nginx alpine ebe2c7c61055 6 days ago 18MB
However, after the complete build it has grown to ~650MB.
$ du -sh *
639.7M ModSecurity
408.0K ModSecurity-nginx
7.5M nginx-1.13.12
996.0K nginx-1.13.12.tar.gz
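To see where the space actually goes, the same inspection can be reproduced with standard commands (a sketch; the image names are the ones from the docker images listing above):
# per-layer sizes for both builds
docker history oldimage:latest
docker history newimage:latest
# disk usage inside a built image
docker run --rm oldimage:latest sh -c 'du -sh /opt/* /usr/local/*'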

PHP 7.1 Docker build will not compile with pgsql

I am using the polinux/httpd:centos repo to run Apache and PHP 7.1. The build seems to go ok. There are a few warnings related to keys, but none related to php or pgsql. The build completes successfully, but when I ssh into the container the module is not listed (php -m) and there's no extension config file in php.d.
I verified it is listed in the Dockerfile multiple times.
I can install php71-php-pgsql manually after starting the container, but then I can't restart Apache without restarting the container.
I've tried moving yum install php71-php-pgsql to the end of the Dockerfile as a separate RUN command (in addition to the original), but it reports the package has already been installed, yet when I ssh into the container it's not listed in the modules and there is no config file, as mentioned above.
When I rebuild a container I stop and remove it, then run the build with the --no-cache option.
I'm stumped...
The Dockerfile is quite long, but I can post if that would be helpful.
Thanks.
UPDATE: Dockerfile per request...
FROM polinux/httpd:centos
ENV \
NVM_DIR="/usr/local/nvm" \
NODE_VERSION="9.2.0" \
GIT_VERSION="2.15.0" \
PHP_VERSION="71"
ADD mariadb.repo /etc/yum.repos.d/mariadb.repo
RUN \
rpm --rebuilddb && yum clean all && rm -rf /var/cache/yum && \
yum update -y && \
yum install -y \
wget \
patch \
bzip2 \
unzip \
make \
openssh-clients \
git \
MariaDB-client && \
rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm && \
yum install -y \
php${PHP_VERSION}-php \
php${PHP_VERSION}-php-bcmath \
php${PHP_VERSION}-php-cli \
php${PHP_VERSION}-php-common \
php${PHP_VERSION}-php-devel \
php${PHP_VERSION}-php-fpm \
php${PHP_VERSION}-php-gd \
php${PHP_VERSION}-php-gmp \
php${PHP_VERSION}-php-intl \
php${PHP_VERSION}-php-json \
php${PHP_VERSION}-php-mbstring \
php${PHP_VERSION}-php-mcrypt \
php${PHP_VERSION}-php-mysqlnd \
php${PHP_VERSION}-php-pgsql \
php${PHP_VERSION}-php-opcache \
php${PHP_VERSION}-php-pdo \
php${PHP_VERSION}-php-pear \
php${PHP_VERSION}-php-process \
php${PHP_VERSION}-php-pspell \
php${PHP_VERSION}-php-xml \
php${PHP_VERSION}-php-pecl-imagick \
php${PHP_VERSION}-php-pecl-mysql \
php${PHP_VERSION}-php-pecl-uploadprogress \
php${PHP_VERSION}-php-pecl-uuid \
php${PHP_VERSION}-php-pecl-memcache \
php${PHP_VERSION}-php-pecl-memcached \
php${PHP_VERSION}-php-pecl-redis \
php${PHP_VERSION}-php-pecl-zip && \
ln -sfF /opt/remi/php${PHP_VERSION}/enable /etc/profile.d/php${PHP_VERSION}-paths.sh && \
ln -sfF /opt/remi/php${PHP_VERSION}/root/usr/bin/{pear,pecl,phar,php,php-cgi,php-config,phpize} /usr/local/bin/. && \
mv -f /etc/opt/remi/php${PHP_VERSION}/php.ini /etc/php.ini && ln -s /etc/php.ini /etc/opt/remi/php${PHP_VERSION}/php.ini && \
rm -rf /etc/php.d && mv /etc/opt/remi/php${PHP_VERSION}/php.d /etc/. && ln -s /etc/php.d /etc/opt/remi/php${PHP_VERSION}/php.d && \
yum install -y \
ImageMagick \
GraphicsMagick \
gcc \
gcc-c++ \
libffi-devel \
libpng-devel \
zlib-devel && \
yum install -y ruby ruby-devel && \
echo 'gem: --no-document' > /etc/gemrc && \
gem update --system && \
gem install bundler && \
export PROFILE=/etc/profile.d/nvm.sh && touch $PROFILE && \
curl -sSL https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash && \
source $NVM_DIR/nvm.sh && \
nvm install $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
nvm use default && \
npm install -g \
gulp \
grunt-cli \
bower \
browser-sync && \
echo -e "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown apache /usr/local/bin/composer && composer --version && \
yum clean all && rm -rf /tmp/yum* && \
sed -i 's|SetHandler application/x-httpd-php|SetHandler "proxy:fcgi://127.0.0.1:9000"|g' /etc/httpd/conf.d/php${PHP_VERSION}-php.conf
ADD container-files /
ENV \
NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
RUN \
mkdir -p /data/tmp/php && \
chmod -R 777 /data/tmp
# Weird issue: For some reason pgsql is not installed above. May be OoO...
# Manually installing worked, so adding here at the end.
RUN \
yum install php71-php-pgsql -y
Chalk this up to inexperience...
This turned out to be a problem with how I was referencing images, first when building and then running the instance. I'm still not sure I fully understand, but I think I was running the base container instead of the modified build.
For others that may run into similar problems, the 2 commands that helped me get this working are:
docker build --rm -t local/httpd-php71 .
... and then ...
docker run \
-d \
--name httpd-php71 \
--restart unless-stopped \
--net dockersubnet \
--volume /www:/var/www \
local/httpd-php71
Where 'local/httpd-php71' is my local/custom build. Before, I was not using any tag in the build command, and I was then referencing 'polinux/httpd:centos', the base image, in the run command.
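If you are unsure which image a running container was actually created from, docker inspect can confirm it (a sketch; the container name is the one from the run command above):
docker inspect --format '{{.Config.Image}}' httpd-php71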
Thanks.

Installing miniconda on alpine linux fails

I have been attempting to install Miniconda on an Alpine Linux Docker image. The minimal "working" example of my failure can be reproduced with Docker as follows:
docker run --rm -it alpine sh
/ # apk update && apk add ca-certificates wget && update-ca-certificates
/ # wget https://repo.continuum.io/miniconda/Miniconda3-4.3.27-Linux-x86_64.sh -O ~/miniconda.sh
/ # sh miniconda.sh -b
PREFIX=/root/miniconda3
installing: python-3.6.2-h02fb82a_12 ...
/root/miniconda.sh: line 361: /root/miniconda3/pkgs/python-3.6.2-h02fb82a_12/bin/python: not found
The file that it looks for is there, though:
/ # ls /root/miniconda3/pkgs/python-3.6.2-h02fb82a_12/bin/python
/root/miniconda3/pkgs/python-3.6.2-h02fb82a_12/bin/python
I would appreciate some insight on this error. I have little idea of what to try next.
According to @VladFrolov, Anaconda's Python is linked against glibc, which isn't available on Alpine. For more details about how he built an Alpine image with conda, look at https://github.com/frol/docker-alpine-miniconda3
PS: It looks like @VladFrolov now maintains the official miniconda3:alpine image: https://github.com/ContinuumIO/docker-images/blob/master/miniconda3/alpine/Dockerfile (thanks for pointing it out, @rpanai)
You can add this before running the ./miniconda.sh -b:
apk --update add \
bash \
curl \
wget \
ca-certificates \
libstdc++ \
glib \
&& wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-node-bower/master/sgerrand.rsa.pub \
&& curl -L "https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.23-r3/glibc-2.23-r3.apk" -o glibc.apk \
&& apk add glibc.apk \
&& curl -L "https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.23-r3/glibc-bin-2.23-r3.apk" -o glibc-bin.apk \
&& apk add glibc-bin.apk \
&& curl -L "https://github.com/andyshinn/alpine-pkg-glibc/releases/download/2.25-r0/glibc-i18n-2.25-r0.apk" -o glibc-i18n.apk \
&& apk add --allow-untrusted glibc-i18n.apk \
&& /usr/glibc-compat/bin/localedef -i en_US -f UTF-8 en_US.UTF-8 \
&& /usr/glibc-compat/sbin/ldconfig /lib /usr/glibc/usr/lib \
&& rm -rf glibc*apk /var/cache/apk/*
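With those packages in place, the installer from the question should run; a sketch, assuming the same Miniconda version and the default install prefix under the root home directory:
wget https://repo.continuum.io/miniconda/Miniconda3-4.3.27-Linux-x86_64.sh -O ~/miniconda.sh
sh ~/miniconda.sh -b
~/miniconda3/bin/conda --version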
