Containerized Terragrunt: Error while installing cloudflare/cloudflare x509: certificate signed by unknown authority - docker

I run Terragrunt in a Docker container with the following provider configuration:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.41"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 3.32"
    }
  }
}
I run the following command to create a local Docker container:
$path = 'C:\git\'
docker run --rm -it `
    -e ARM_CLIENT_ID=$appid `
    -e ARM_CLIENT_SECRET=$password `
    -e ARM_TENANT_ID=$tenant `
    -e ARM_SUBSCRIPTION_ID=$subscription `
    -v ${path}:/terragrunt-folder terragrunt:1.0 sh
Running terragrunt init inside the container, from /terragrunt-folder/qa/eastus/002/a_service, produces the following error message:
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Reusing previous version of cloudflare/cloudflare from the dependency lock file
- Finding latest version of hashicorp/azuread...
- Reusing previous version of hashicorp/time from the dependency lock file
- Installing hashicorp/azurerm v3.43.0...
- Installed hashicorp/azurerm v3.43.0 (signed by HashiCorp)
- Installing hashicorp/azuread v2.34.1...
- Installed hashicorp/azuread v2.34.1 (signed by HashiCorp)
- Installing hashicorp/time v0.9.1...
- Installed hashicorp/time v0.9.1 (signed by HashiCorp)
╷
│ Error: Failed to install provider
│
│ Error while installing cloudflare/cloudflare v3.34.0: could not query
│ provider registry for registry.terraform.io/cloudflare/cloudflare: failed
│ to retrieve authentication checksums for provider: the request failed,
│ please try again later: Get
│ "https://objects.githubusercontent.com/github-production-release-asset-2e65be/93446113/c6fed044-e8e2-4b3f-a40e-d0eef378d5a4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230218T155203Z&X-Amz-Expires=300&X-Amz-Signature=783ec3bf93b7375d94f2917936b74116dc1e082707356c47e94068407102d603&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=93446113&response-content-disposition=attachment%3B%20filename%3Dterraform-provider-cloudflare_3.34.0_SHA256SUMS&response-content-type=application%2Foctet-stream":
│ x509: certificate signed by unknown authority
╵
ERRO[0051] Terraform invocation failed in /terragrunt-folder/qa/eastus/002/a_service/.terragrunt-cache/fkoLZJwS3kZvCk8fldyKdEtQN24/YVeC5shlCd8w03Dinw3RCnNsmSs/app_service_sql_server_batch prefix=[/terragrunt-folder/qa/eastus/002/analysis_service]
ERRO[0051] 1 error occurred:
* exit status 1
I was able to copy the provider binary manually into the Terragrunt cache folder at c:/git/qa/eastus/002/a_service/.terragrunt-cache/fkoLZJwS3kZvCk8fldyKdEtQN24/YVeC5shlCd8w03Dinw3RCnNsmSs/app_service_sql_server_batch/.terraform/providers/registry.terraform.io/cloudflare/cloudflare/3.32.0/linux_amd64/, and that resolves the issue, since Terraform then skips the provider download stage.
The problem is that this is extremely inconvenient, since I have multiple services, each of which is encapsulated as a module, and those modules are referenced through terragrunt.hcl files.
├───a_service
│ terraform.tfvars
│ terragrunt.hcl
│
├───b_service
│ terraform.tfvars
│ terragrunt.hcl
│
├───c_service
│ terraform.tfvars
│ terragrunt.hcl
│
├───d_service
│ terraform.tfvars
│ terragrunt.hcl
│
├───e_service
│ terraform.tfvars
│ terragrunt.hcl
│
and more ...
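A less manual alternative to copying binaries into every .terragrunt-cache (an assumption on my part, not something the original post tried) would be a filesystem mirror in the Terraform CLI configuration, so that all modules resolve cloudflare/cloudflare from a local directory instead of the registry:

```hcl
# ~/.terraformrc (or the file pointed to by TF_CLI_CONFIG_FILE).
# The mirror path is hypothetical; it must be pre-populated with the
# provider packages in the layout `terraform providers mirror` produces.
provider_installation {
  filesystem_mirror {
    path    = "/terragrunt-folder/providers"
    include = ["registry.terraform.io/cloudflare/*"]
  }
  direct {
    exclude = ["registry.terraform.io/cloudflare/*"]
  }
}
```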
EDIT 1
Here is my Dockerfile:
FROM alpine:3.16 as builder

# Install build dependencies
RUN set -eux \
    && apk --no-cache add \
        coreutils \
        curl \
        dpkg \
        git \
        unzip

# Get Terraform
ARG VERSION=1.3.7
RUN set -eux \
    && if [ "$(dpkg --print-architecture | awk -F'-' '{print $NF}')" = "i386" ]; then \
        ARCH=386; \
    elif [ "$(uname -m)" = "x86_64" ]; then \
        ARCH=amd64; \
    elif [ "$(uname -m)" = "aarch64" ]; then \
        ARCH=arm64; \
    elif [ "$(uname -m)" = "armv7l" ]; then \
        ARCH=arm; \
    fi \
    \
    && curl --fail -sS -L -O \
        https://releases.hashicorp.com/terraform/${VERSION}/terraform_${VERSION}_linux_${ARCH}.zip \
    && unzip terraform_${VERSION}_linux_${ARCH}.zip \
    && mv terraform /usr/bin/terraform \
    && chmod +x /usr/bin/terraform

# Get Terragrunt
ARG TG_VERSION=latest
RUN set -eux \
    && git clone https://github.com/gruntwork-io/terragrunt /terragrunt \
    && cd /terragrunt \
    && if [ "${TG_VERSION}" = "latest" ]; then \
        VERSION="$( git describe --abbrev=0 --tags )"; \
    else \
        VERSION="$( git tag | grep -E "v${TG_VERSION}\.[.0-9]+" | sort -Vu | tail -1 )"; \
    fi \
    # Get correct architecture
    && if [ "$(dpkg --print-architecture | awk -F'-' '{print $NF}')" = "i386" ]; then \
        ARCH=386; \
    elif [ "$(uname -m)" = "x86_64" ]; then \
        ARCH=amd64; \
    elif [ "$(uname -m)" = "aarch64" ]; then \
        ARCH=arm64; \
    elif [ "$(uname -m)" = "armv7l" ]; then \
        ARCH=arm; \
    fi \
    \
    && curl --insecure --fail -sS -L \
        https://github.com/gruntwork-io/terragrunt/releases/download/${VERSION}/terragrunt_linux_${ARCH} \
        -o /usr/bin/terragrunt \
    && chmod +x /usr/bin/terragrunt \
    \
    && terraform --version \
    && terragrunt --version

FROM mcr.microsoft.com/azure-cli
RUN set -eux \
    && apk --no-cache add \
        coreutils \
        curl \
        dpkg \
        git \
        unzip
COPY --from=builder /usr/bin/terraform /usr/bin/terraform
COPY --from=builder /usr/bin/terragrunt /usr/bin/terragrunt
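The curl --insecure in the Terragrunt download step above suggests TLS interception (for example, a corporate proxy) was already being worked around at build time. If that is the cause of the x509 error, a sketch of a cleaner fix, assuming you can export the intercepting proxy's root certificate to a file (corporate-root-ca.crt is a hypothetical name), is to trust it in the final image:

```dockerfile
FROM mcr.microsoft.com/azure-cli
# Trust the intercepting proxy's root CA so curl, terraform and terragrunt
# can verify TLS instead of needing --insecure.
COPY corporate-root-ca.crt /usr/local/share/ca-certificates/corporate-root-ca.crt
RUN apk --no-cache add ca-certificates \
    && update-ca-certificates
```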
EDIT 2
When I run this in the container:
curl "https://objects.githubusercontent.com/github-production-release-asset-2e65be/93446113/c6fed044-e8e2-4b3f-a40e-d0eef378d5a4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230218T222220Z&X-Amz-Expires=300&X-Amz-Signature=14b5edce7c1a2f47d82389268701b2ede33da0992a473318dc98b359fbf38fc9&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=93446113&response-content-disposition=attachment%3B%20filename%3Dterraform-provider-cloudflare_3.34.0_SHA256SUMS&response-content-type=application%2Foctet-stream"
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
If I run the same command with the --insecure flag:
curl --insecure "https://objects.githubusercontent.com/github-production-release-asset-2e65be/93446113/c6fed044-e8e2-4b3f-a40e-d0eef378d5a4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230218T222220Z&X-Amz-Expires=300&X-Amz-Signature=14b5edce7c1a2f47d82389268701b2ede33da0992a473318dc98b359fbf38fc9&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=93446113&response-content-disposition=attachment%3B%20filename%3Dterraform-provider-cloudflare_3.34.0_SHA256SUMS&response-content-type=application%2Foctet-stream"
03729b0fcf189e732aca54452a105d82fec839580cb5d0137317af9163e0e4dd terraform-provider-cloudflare_3.34.0_windows_arm64.zip
121b16a779e9f2fe8c96e98f32514ee9228346fc240ce12c3fb440958b93d127 terraform-provider-cloudflare_3.34.0_freebsd_arm64.zip
14509f521845eedd57a8791d76958e50bea4928760a152cd853e43f2c81a329b terraform-provider-cloudflare_3.34.0_linux_arm64.zip
273336ec2bc59ab90916706c074be27f3fe6ab42addc61a354a0ef5e10c2efa5 terraform-provider-cloudflare_3.34.0_linux_386.zip
54931c30f71666856c5d749698264c15196103667c87d961f3d293ff8a5c3237 terraform-provider-cloudflare_3.34.0_freebsd_amd64.zip
58a35eea3b9e1d2f39d7b5b1c6cf107b70eacdf5891017d6667902903db3bd94 terraform-provider-cloudflare_3.34.0_freebsd_arm.zip
5ec958afe392a76a1fea262d9070df839c4d811fc6ffd613a37f8b939ab159ef terraform-provider-cloudflare_3.34.0_linux_amd64.zip
7c24c0572aa9beee20a33cb18ac54d5088a09653e94664a9f74a9af2ae0e3554 terraform-provider-cloudflare_3.34.0_windows_arm.zip
890df766e9b839623b1f0437355032a3c006226a6c200cd911e15ee1a9014e9f terraform-provider-cloudflare_3.34.0_manifest.json
9248c43f795dbe54e07c6dbc2fb8e2f20aeac8f21ec91373d52b9975f285ba7e terraform-provider-cloudflare_3.34.0_darwin_arm64.zip
b09abd506601b7c3e0b3bfde0b8b9e1aed7f52b5ad629ef2865b8321852409c7 terraform-provider-cloudflare_3.34.0_darwin_amd64.zip
e00032df4cd4aad12adf3b7955fca3d1baa8bff9436c775588417da171a4e1d9 terraform-provider-cloudflare_3.34.0_freebsd_386.zip
e4a8812770914d6ce9d1f8399d702e3fb0ecc4bfd6220ba015fcb3884b243c69 terraform-provider-cloudflare_3.34.0_linux_arm.zip
f2ad0991ef0820b3fc5bd0a500be4dceffe0b5b2ac6c9c5fd17cbb350f2f1209 terraform-provider-cloudflare_3.34.0_windows_386.zip
fea3a9dfb1e752dc2864028049a4af05fabf7b62eb57fff26d139a424e3476fd terraform-provider-cloudflare_3.34.0_windows_amd64.zip
[12]+ Done(127) response-content-disposition=attachment%3B%20filename%3Dterraform-provider-cloudflare_3.34.0_SHA256SUMS
[11]+ Done curl
[9]+ Done curl
[8]+ Done curl
[6]+ Done(127) X-Amz-Signature=14b5edce7c1a2f47d82389268701b2ede33da0992a473318dc98b359fbf38fc9
[2]+ Done curl --insecure https://objects.githubusercontent.com/github-production-release-asset-2e65be/93446113/c6fed044-e8e2-4b3f-a40e-d0eef378d5a4?X-Amz-Algorithm=AWS4-HMAC-SHA256
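As an aside, the `[N]+ Done` job lines above indicate that at some point the URL was pasted without quotes, so the shell treated each `&` as a background-job separator and split the command into several jobs. Quoting keeps the URL as a single argument; a quick illustration with a generic URL (not the real one):

```shell
url='https://example.com/download?a=1&b=2&c=3'
# Quoted expansion: the whole URL, ampersands included, stays one argument.
set -- "$url"
echo "argument count: $#"
# prints: argument count: 1
```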

Related

Laradock how to add package using apk instead of apt-get

I am using Laradock to deploy a Laravel app.
I am facing a problem with generating a PDF file to attach to an email in a queued job. The pending jobs are handled by the php-worker container.
The problem is that when I want to attach a PDF to an email which is queued (and therefore handled by the php-worker container), I get the following error:
sh: /usr/local/bin/wkhtmltopdf: not found
which means that wkhtmltopdf is not installed in the php-worker container.
So, taking a look at either the php-fpm or workspace Dockerfile, I can see how wkhtmltopdf is installed, like so:
#####################################
# wkhtmltopdf:
#####################################
USER root
ARG INSTALL_WKHTMLTOPDF=false
RUN if [ ${INSTALL_WKHTMLTOPDF} = true ]; then \
    apt-get install -yqq \
        libxrender1 \
        libfontconfig1 \
        libx11-dev \
        libjpeg62 \
        libxtst6 \
        fontconfig \
        libjpeg62-turbo \
        xfonts-base \
        xfonts-75dpi \
        wget \
    && wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.stretch_amd64.deb \
    && dpkg -i wkhtmltox_0.12.6-1.stretch_amd64.deb \
    && apt -f install \
;fi
If I copy that installation code into the php-worker container, I get the following error
/bin/sh: apt-get: not found
So, searching further, it seems the php-worker container is Alpine-based, so it probably needs apk add instead.
I have tried the following:
#####################################
# wkhtmltopdf:
#####################################
USER root
ARG INSTALL_WKHTMLTOPDF=false
RUN if [ ${INSTALL_WKHTMLTOPDF} = true ]; then \
    apk add --no-cache \
        libxrender1 \
        libfontconfig1 \
        libx11-dev \
        libjpeg62 \
        libxtst6 \
        fontconfig \
        libjpeg62-turbo \
        xfonts-base \
        xfonts-75dpi \
        wget \
        wkhtmltopdf \
;fi
But I haven't had any luck:
ERROR: unable to select packages: wkhtmltopdf (no such package): required by: world[wkhtmltopdf]
I have been editing the Dockerfile based on this link and this is what I've modified so far:
Dockerfile
#
#--------------------------------------------------------------------------
# Image Setup
#--------------------------------------------------------------------------
#
ARG LARADOCK_PHP_VERSION
FROM php:${LARADOCK_PHP_VERSION}-alpine3.14
LABEL maintainer="Mahmoud Zalt <mahmoud#zalt.me>"
ARG LARADOCK_PHP_VERSION
# If you're in China, or you need to change sources, will be set CHANGE_SOURCE to true in .env.
ARG CHANGE_SOURCE=false
RUN if [ ${CHANGE_SOURCE} = true ]; then \
    # Change application source from dl-cdn.alpinelinux.org to aliyun source
    sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/' /etc/apk/repositories \
;fi
RUN apk --update add wget \
    curl \
    git \
    build-base \
    libmcrypt-dev \
    libxml2-dev \
    linux-headers \
    pcre-dev \
    zlib-dev \
    autoconf \
    cyrus-sasl-dev \
    libgsasl-dev \
    oniguruma-dev \
    libressl \
    libressl-dev \
    supervisor
# ...................
#####################################
# wkhtmltopdf:
#####################################
USER root
ARG INSTALL_WKHTMLTOPDF=false
RUN set -xe; \
    if [ ${INSTALL_WKHTMLTOPDF} = true ]; then \
        # Install dependencies for wkhtmltopdf
        apk add --update --no-cache --wait 10 \
        && apk --no-cache upgrade \
        && apk add --no-cache \
            bash \
            libstdc++ \
            libx11 \
            libxrender \
            libxext \
            libssl1.1 \
            ca-certificates \
            fontconfig \
            freetype \
            ttf-dejavu \
            ttf-droid \
            ttf-freefont \
            ttf-liberation \
            xvfb \
            #libQt5WebKit \ This throws error. Commented out.
            #libQt5WebKitWidgets \ This throws error. Commented out.
            #ttf-ubuntu-font-family \ This throws error. Commented out.
        && apk add --update --no-cache --virtual .build-deps \
            msttcorefonts-installer \
            vim \
        \
        # Install microsoft fonts
        && update-ms-fonts \
        && fc-cache -f \
        \
        # Clean up when done
        && rm -rf /tmp/* \
        && apk del .build-deps \
        && wget http://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/wkhtmltopdf-0.12.6-r0.apk \
        && apk add --allow-untrusted wkhtmltopdf-0.12.6-r0.apk \
        && echo 'WKHTMLTOPDF INSTALLED?' \
        && which wkhtmltopdf \
        # && ln -s /usr/bin/wkhtmltopdf /usr/local/bin/wkhtmltopdf \
        && cp /usr/bin/wkhtmltoimage /usr/local/bin/ \
        && cp /usr/bin/wkhtmltopdf /usr/local/bin/ \
        && chmod +x /usr/local/bin/wkhtmltoimage \
        && chmod +x /usr/local/bin/wkhtmltopdf \
        && echo 'wkhtmltopdf version: ' \
        && /usr/local/bin/wkhtmltopdf -V \
        && echo 'whoami & permissions' \
        && whoami \
        && ls -lah /usr/bin/ \
        && ls -lah /usr/local/bin/ \
    ;fi
#
#-----------------------------
# Set PHP memory_limit to infinity
#-------------------------------
#
RUN echo 'set php memory to -1:' \
    && sed -i 's/memory_limit = .*/memory_limit=-1 /' /usr/local/etc/php/php.ini-production \
    && sed -i 's/memory_limit = .*/memory_limit=-1 /' /usr/local/etc/php/php.ini-development \
    && cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini
# ...
Finally, wkhtmltopdf seems to be installed:
+ apk add --allow-untrusted wkhtmltopdf-0.12.6-r0.apk
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/43) Installing icu-libs (67.1-r2)
(2/43) Installing libpcre2-16 (10.36-r0)
(3/43) Installing qt5-qtbase (5.15.3_git20210406-r0)
(4/43) Installing hicolor-icon-theme (0.17-r1)
(5/43) Installing wayland-libs-server (1.19.0-r0)
(6/43) Installing mesa-gbm (21.1.2-r0)
(7/43) Installing wayland-libs-client (1.19.0-r0)
(8/43) Installing qt5-qtdeclarative (5.15.3_git20210531-r0)
(9/43) Installing libxcomposite (0.4.5-r0)
(10/43) Installing wayland-libs-cursor (1.19.0-r0)
(11/43) Installing wayland-libs-egl (1.19.0-r0)
(12/43) Installing libxkbcommon (1.2.1-r0)
(13/43) Installing qt5-qtwayland (5.15.3_git20210510-r0)
(14/43) Installing mesa-egl (21.1.2-r0)
(15/43) Installing libevdev (1.11.0-r1)
(16/43) Installing mtdev (1.1.6-r0)
(17/43) Installing eudev-libs (3.2.10-r0)
(18/43) Installing libinput-libs (1.18.0-r0)
(19/43) Installing xcb-util-wm (0.4.1-r1)
(20/43) Installing xcb-util (0.4.0-r3)
(21/43) Installing xcb-util-image (0.4.0-r1)
(22/43) Installing xcb-util-keysyms (0.4.0-r1)
(23/43) Installing xcb-util-renderutil (0.3.9-r1)
(24/43) Installing libxkbcommon-x11 (1.2.1-r0)
(25/43) Installing qt5-qtbase-x11 (5.15.3_git20210406-r0)
(26/43) Installing qt5-qtsvg (5.15.3_git20200406-r0)
(27/43) Installing qt5-qtlocation (5.15.3_git20201109-r0)
(28/43) Installing qt5-qtsensors (5.15.3_git20201028-r1)
(29/43) Installing qt5-qtwebchannel (5.15.3_git20201028-r0)
(30/43) Installing libxv (1.0.11-r2)
(31/43) Installing alsa-lib (1.2.5-r2)
(32/43) Installing cdparanoia-libs (10.2-r9)
(33/43) Installing gstreamer (1.18.4-r0)
(34/43) Installing libogg (1.3.5-r0)
(35/43) Installing opus (1.3.1-r1)
(36/43) Installing orc (0.4.32-r0)
(37/43) Installing libtheora (1.1.1-r16)
(38/43) Installing libvorbis (1.3.7-r0)
(39/43) Installing gst-plugins-base (1.18.4-r0)
(40/43) Installing hyphen (2.8.8-r1)
(41/43) Installing libxslt (1.1.35-r0)
(42/43) Installing qt5-qtwebkit (5.212.0_alpha4-r14)
(43/43) Installing wkhtmltopdf (0.12.6-r0)
Executing busybox-1.33.1-r7.trigger
OK: 877 MiB in 254 packages
WKHTMLTOPDF INSTALLED?
+ echo 'WKHTMLTOPDF INSTALLED?'
+ which wkhtmltopdf
/usr/bin/wkhtmltopdf
+ cp /usr/bin/wkhtmltoimage /usr/local/bin/
+ cp /usr/bin/wkhtmltopdf /usr/local/bin/
+ chmod +x /usr/local/bin/wkhtmltoimage
+ chmod +x /usr/local/bin/wkhtmltopdf
+ echo 'wkhtmltopdf version: '
+ /usr/local/bin/wkhtmltopdf -V
wkhtmltopdf version:
wkhtmltopdf 0.12.6
+ echo 'whoami & permissions'
+ whoami
whoami & permissions
root
+ ls -lah /usr/bin/
-rwxr-xr-x 1 root root 979 Jun 1 2021 supervisorctl
-rwxr-xr-x 1 root root 975 Jun 1 2021 supervisord
-rwxr-xr-x 1 root root 114.1K Jun 11 2020 wkhtmltoimage
-rwxr-xr-x 1 root root 162.1K Jun 11 2020 wkhtmltopdf
+ ls -lah /usr/local/bin/
-rwxr-xr-x 1 root root 114.1K May 25 16:37 wkhtmltoimage
-rwxr-xr-x 1 root root 162.1K May 25 16:37 wkhtmltopdf
Step 82/86 : COPY supervisord.conf /etc/supervisord.conf
---> de059f102569
Step 83/86 : ENTRYPOINT ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
But when I exec into the container to verify that wkhtmltopdf is indeed installed:
❯ docker container exec php-worker /usr/local/bin/wkhtmltopdf -V
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "/usr/local/bin/wkhtmltopdf": stat /usr/local/bin/wkhtmltopdf: no such file or directory: unknown
it turns out that it hasn't been installed! And therefore I get the exact same error in my application:
sh: /usr/local/bin/wkhtmltopdf: not found
On the other hand, supervisor does work:
❯ docker container exec php-worker supervisorctl
laravel-scheduler:laravel-scheduler_00 RUNNING pid 52576, uptime 18:27:24
laravel-worker:laravel-worker_00 RUNNING pid 52577, uptime 18:27:24
supervisor>
Does anybody know how to install wkhtmltopdf in Alpine Dockerfile for real?
The PHP images you're using are built on Alpine 3.15; it looks like wkhtmltopdf isn't packaged in that version of Alpine:
$ docker run --rm alpine:3.15 sh -c 'apk add --update wkhtmltopdf'
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
ERROR: unable to select packages:
wkhtmltopdf (no such package):
required by: world[wkhtmltopdf]
It looks like wkhtmltopdf is only available in 3.14 and earlier (I
checked 3.14 and 3.13):
$ docker run --rm alpine:3.14 sh -c 'apk add --update wkhtmltopdf'
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/103) Installing dbus-libs (1.12.20-r2)
(2/103) Installing libgcc (10.3.1_git20210424-r2)
[...]
(103/103) Installing wkhtmltopdf (0.12.6-r0)
Executing busybox-1.33.1-r7.trigger
OK: 196 MiB in 117 packages
This is noted in the release notes for 3.15, which say:
QtWebKit was removed due to lack of upstream support
qt5-qtwebkit, kdewebkit, wkhtmltopdf, and py3-pdfkit have been removed due to known vulnerabilities and lack of upstream support for qtwebkit. Other programs have been adjusted to use qt5-qtwebengine where appropriate. The most direct replacement for wkhtmltopdf is weasyprint, which is available in the Alpine Linux community repository. puppeteer and pandoc are also options, depending on your needs. See #12888 for more information.
You could try building your own PHP base image on top of an older Alpine release using the upstream Dockerfile, or you could try starting with the vanilla alpine:3.14 image and installing PHP using apk.
Or just stick with an Ubuntu-based image, which still packages
wkhtmltopdf.
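The first suggestion can be sketched as a minimal Dockerfile (untested; the exact PHP image tag is an assumption, and all the Laradock-specific extensions and the supervisor setup are omitted):

```dockerfile
# Pin a PHP image built on Alpine 3.14, where wkhtmltopdf is still packaged.
FROM php:8.0-alpine3.14
RUN apk add --no-cache wkhtmltopdf \
    # The Laravel app calls /usr/local/bin/wkhtmltopdf, so link it there.
    && ln -s /usr/bin/wkhtmltopdf /usr/local/bin/wkhtmltopdf
```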

Issues deploying Self-Hosted Agent on Linux (Ubuntu 18.04) Container

To begin, I followed this documentation in order to deploy a self-hosted agent in a Linux container. I didn't do anything other than create the Dockerfile and start.sh file as it stated (copy and paste). To confirm, I will add the files here:
Dockerfile
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        jq \
        git \
        iputils-ping \
        libcurl4 \
        libicu60 \
        libunwind8 \
        netcat \
        libssl1.0
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
Start.sh
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

rm -rf /azp/agent
mkdir /azp/agent
cd /azp/agent

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    ./config.sh remove --unattended \
      --auth PAT \
      --token $(cat "$AZP_TOKEN_FILE")
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

print_header "1. Determining matching Azure Pipelines agent..."

AZP_AGENT_RESPONSE=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;api-version=3.0-preview' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")

if echo "$AZP_AGENT_RESPONSE" | jq . >/dev/null 2>&1; then
  AZP_AGENTPACKAGE_URL=$(echo "$AZP_AGENT_RESPONSE" \
    | jq -r '.value | map([.version.major,.version.minor,.version.patch,.downloadUrl]) | sort | .[length-1] | .[3]')
fi

if [ -z "$AZP_AGENTPACKAGE_URL" -o "$AZP_AGENTPACKAGE_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent - check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi

print_header "2. Downloading and installing Azure Pipelines agent..."

curl -LsS $AZP_AGENTPACKAGE_URL | tar -xz & wait $!

source ./env.sh

print_header "3. Configuring Azure Pipelines agent..."

./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!

print_header "4. Running Azure Pipelines agent..."

trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run.sh & wait $!
Despite copying and pasting these from the documentation, I receive an error when the start.sh script reaches the 3rd step (Configuring Azure Pipelines agent).
Error message: qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
If it helps, I am running Docker on macOS, but as you can see, the container is Ubuntu.
Thank you
According to the documentation, both Windows and Linux are supported as container hosts, but macOS is not supported as a container host. So you can try creating the Docker container on a Windows host and trying again.

File not found in Docker image

While I'm not really into Docker, I'm struggling with a "file not found" issue.
I've added an ls command to show whether the file is really there. Sometimes it is and sometimes it isn't, but the "file is missing" error always occurs.
I'm running Docker Desktop Community V 2.0.0.3 (31259) on Win10-2004
It goes wrong when a library is built:
Dockerfile:
ADD ./build_opus.sh /usr/local/sbin/
#added for debugging
RUN cd /usr/local/sbin && ls
RUN IFS=" " && \
    for arch in $TARGET_ARCHS; \
    do \
        ./build_opus.sh ${arch}; \
    done
Output:
---> Using cache
---> 4ddfdc31b266
Step 28/40 : ADD ./build_opus.sh /usr/local/sbin/
---> Using cache
---> e4c4ac7fea69
Step 29/40 : RUN cd /usr/local/sbin && ls
---> Using cache
---> 6fda1595d295
Step 30/40 : RUN IFS=" " && for arch in $TARGET_ARCHS; do .usr/local/sbin/build_opus.sh ${arch}; done
---> Running in 2fcf560d0dbc
/bin/sh: 1: .usr/local/sbin/build_opus.sh: not found
/bin/sh: 1: .usr/local/sbin/build_opus.sh: not found
/bin/sh: 1: .usr/local/sbin/build_opus.sh: not found
/bin/sh: 1: .usr/local/sbin/build_opus.sh: not found
The command '/bin/sh -c IFS=" " && for arch in $TARGET_ARCHS; do .usr/local/sbin/build_opus.sh ${arch}; done' returned a non-zero code: 127
Does anyone have an idea?
EDIT: ADDED FULL DOCKER FILE
Full Docker file:
FROM ubuntu:latest
##############################
# Download dependencies
##############################
RUN dpkg --add-architecture i386 && \
apt-get -y upgrade && \
apt-get -y dist-upgrade && \
apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install \
software-properties-common git curl bzip2 gcc g++ binutils make autoconf openssl \
libssl-dev ant libopus0 libpcre3 libpcre3-dev build-essential nasm libc6:i386 libstdc++6:i386 zlib1g:i386 \
openjdk-8-jdk unzip
##############################
# Configuration
##############################
# ENV TARGET_ARCHS "armeabi armeabi-v7a x86 mips arm64-v8a x86_64 mips64"
ENV TARGET_ARCHS "armeabi-v7a x86 arm64-v8a x86_64"
ENV ANDROID_NDK_DOWNLOAD_URL "https://dl.google.com/android/repository/android-ndk-r12b-linux-x86_64.zip"
ENV ANDROID_SDK_DOWNLOAD_URL "https://dl.google.com/android/repository/tools_r25.2.5-linux.zip"
ENV ANDROID_SETUP_APIS "23 25"
ENV ANDROID_BUILD_TOOLS_VERSION 25
ENV ANDROID_TARGET_API 23
#ENV PJSIP_DOWNLOAD_URL "http://www.pjsip.org/release/2.7.1/pjproject-2.7.1.tar.bz2"
ENV PJSIP_DOWNLOAD_URL "https://github.com/pjsip/pjproject/archive/2.9.tar.gz"
ENV SWIG_DOWNLOAD_URL "http://prdownloads.sourceforge.net/swig/swig-3.0.7.tar.gz"
ENV OPENSSL_DOWNLOAD_URL "https://www.openssl.org/source/openssl-1.0.2g.tar.gz"
ENV OPENH264_DOWNLOAD_URL "https://github.com/cisco/openh264/archive/v1.7.0.tar.gz"
ENV OPENH264_TARGET_NDK_LEVEL 23
ENV OPUS_DOWNLOAD_URL "http://downloads.xiph.org/releases/opus/opus-1.2.1.tar.gz"
ENV OPUS_ANDROID_MK_DOWNLOAD_URL "https://trac.pjsip.org/repos/raw-attachment/ticket/1904/Android.mk"
ENV PATH /sources/android_ndk:$PATH
##############################
# Download sources
##############################
RUN mkdir -p /sources/android_ndk && \
mkdir -p /sources/android_sdk && \
mkdir -p /sources/pjsip && \
mkdir -p /sources/swig && \
mkdir -p /sources/openssl && \
mkdir -p /sources/opus && \
mkdir -p /sources/openh264
# Download Android NDK
RUN cd /sources/android_ndk && \
curl -L -# -o ndk.zip "$ANDROID_NDK_DOWNLOAD_URL" && \
unzip ndk.zip && \
rm -rf ndk.zip && \
mv android-*/* ./
# Download Android SDK & APIs
RUN cd /sources/android_sdk && \
curl -L -# -o sdk.zip "$ANDROID_SDK_DOWNLOAD_URL" && \
unzip sdk.zip
RUN cd /sources/android_sdk/tools && \
ALL_SDK=$(./android list sdk --all) && \
IFS=" " && \
for api in $ANDROID_SETUP_APIS; \
do \
PACKAGE=$(echo "${ALL_SDK}" | grep "API ${api}" | head -n 1 | awk '{print $1}' | cut -d'-' -f 1); \
echo yes | ./android update sdk --all --filter ${PACKAGE} --no-ui --force; \
done && \
PACKAGE=$(echo "${ALL_SDK}" | grep "Android SDK Platform-tools" | head -n 1 | awk '{print $1}' | cut -d'-' -f 1) && \
echo yes | ./android update sdk --all --filter ${PACKAGE} --no-ui --force && \
PACKAGE=$(echo "${ALL_SDK}" | grep "Build-tools" | grep "${BUILD_TOOLS_VERSION}" | head -n 1 | awk '{print $1}' | cut -d'-' -f 1) && \
echo yes | ./android update sdk --all --filter ${PACKAGE} --no-ui --force
# Download Pjsip
RUN cd /sources/pjsip && \
curl -L -# -o pjsip.tar.gz "$PJSIP_DOWNLOAD_URL" && \
tar xzvf pjsip.tar.gz && \
rm -rf pjsip.tar.gz && \
mv pjproject-*/* ./
# Download Swig
RUN cd /sources/swig && \
curl -L -# -o swig.tar.gz "$SWIG_DOWNLOAD_URL" && \
tar xzf swig.tar.gz && \
rm -rf swig.tar.gz && \
mv swig-*/* ./
# Download OpenSSL
RUN cd /sources/openssl && \
curl -L -# -o openssl.tar.gz "$OPENSSL_DOWNLOAD_URL" && \
tar xzf openssl.tar.gz && \
rm -rf openssl.tar.gz && \
mv openssl-*/* ./
# Download Opus
RUN cd /sources/opus && \
curl -L -# -o opus.tar.gz "$OPUS_DOWNLOAD_URL" && \
tar xzf opus.tar.gz && \
rm -rf opus.tar.gz && \
mv opus-*/* ./ && \
mkdir ./jni && \
cd ./jni && \
curl -L -# -o Android.mk "$OPUS_ANDROID_MK_DOWNLOAD_URL"
# Download OpenH264
RUN cd /sources/openh264 && \
curl -L -# -o openh264.tar.gz "$OPENH264_DOWNLOAD_URL" && \
tar xzf openh264.tar.gz && \
rm -rf openh264.tar.gz && \
mv openh264-*/* ./
##############################
# Build swig, openssl, opus, openh264
##############################
RUN mkdir -p /output/openssl/ && \
mkdir -p /output/openh264/ && \
mkdir -p /output/pjsip && \
mkdir -p /output/opus
# Build opus
ADD ./build_opus.sh /usr/local/sbin/
RUN cd /usr/local/sbin && ls
RUN IFS=" " && \
for arch in $TARGET_ARCHS; \
do \
./build_opus.sh ${arch}; \
done
# Build swig
RUN cd /sources/swig && \
./configure && \
make && \
make install
# Build OpenH264
ADD ./build_openh264.sh /usr/local/sbin/
RUN cd /usr/local/sbin & ls
RUN IFS=" " && \
for arch in $TARGET_ARCHS; \
do \
./build_openh264.sh ${arch}; \
done
# Build openssl
ADD ./build_openssl.sh /usr/local/sbin/
RUN IFS=" " && \
for arch in $TARGET_ARCHS; \
do \
build_openssl.sh ${arch}; \
done
# Build pjsip
ADD ./build_pjsip.sh /usr/local/sbin/
RUN IFS=" " && \
for arch in $TARGET_ARCHS; \
do \
build_pjsip.sh ${arch}; \
done
# Dist
RUN mkdir -p /dist/android/src/main && \
mv /output/pjsip/* /dist/android/src/main && \
rm -rf /dist/android/src/main/java/org/pjsip/pjsua2/app
RUN IFS=" " && \
for arch in $TARGET_ARCHS; \
do \
mv /output/openh264/${arch}/lib/libopenh264.so /dist/android/src/main/jniLibs/${arch}/; \
done
.sh file to start the docker:
#!/bin/bash
set -e
IMAGE_NAME="react-native-pjsip-builder/android"
CONTAINER_NAME="react-native-pjsip-builder-${RANDOM}"
rm -rf ./dist/android;
mkdir -p ./dist/;
docker build -t react-native-pjsip-builder/android ./android/;
docker run --name ${CONTAINER_NAME} ${IMAGE_NAME} bin/true
docker cp ${CONTAINER_NAME}:/dist/android ./dist/android
docker rm ${CONTAINER_NAME}
You are confusing WORKDIR with cd. In Docker there is an instruction called WORKDIR, which acts like cd: it changes the working directory for all subsequent instructions. Using cd only changes the directory within that particular layer; when the build moves on to the next instruction, the working directory reverts to WORKDIR.
Hence, to run the script properly, you either need to use the absolute path of the script, or use WORKDIR to change directory and then run the script.
Using Absolute Path:
RUN IFS=" " && \
    for arch in $TARGET_ARCHS; \
    do \
        /usr/local/sbin/build_opus.sh ${arch}; \
    done
Using 'WORKDIR':
WORKDIR /usr/local/sbin/
RUN IFS=" " && \
    for arch in $TARGET_ARCHS; \
    do \
        ./build_opus.sh ${arch}; \
    done
Reference:
WORKDIR in docker
difference between RUN cd and WORKDIR in Dockerfile
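The layer-scoped effect of cd described above can be seen in a small sketch (an illustrative Dockerfile fragment, not from the original answer):

```dockerfile
FROM alpine:3.16
RUN cd /tmp && pwd    # this layer prints /tmp
RUN pwd               # working directory is back to / in the next layer
WORKDIR /tmp
RUN pwd               # now prints /tmp, here and in all later instructions
```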

change gradle Dockerfile to be executed as root user

I am working in GitLab and want to use Gradle to build my Java project, but I ran into this bug with the GitLab runner: https://gitlab.com/gitlab-org/gitlab-runner/issues/2570
One comment is: "I can confirm that it works in v9.1.3 but v9.2.0 is broken. Only when I use root user inside container I can proceed. That really should be fixed, because this regression is seriously impacting security."
So my question is: in which places do I have to change the Dockerfile so that it executes as the root user? https://github.com/keeganwitt/docker-gradle/blob/b0419babd3271f6c8e554fbc8bbd8dc909936763/jdk8-alpine/Dockerfile
My idea is to change the Dockerfile so that it is executed as root, push it to my registry, and use it inside GitLab. But I am not so much into Linux/Docker that I know where the user is defined in the file. Maybe I am totally wrong?
build_java:
  image: gradle:4.4.1-jdk8-alpine-root
  stage: build_java
  script:
    - gradle build
  artifacts:
    expire_in: 1 hour # Workaround to delete artifacts after the build; we only use artifacts to keep files between stages (not after the build)
    paths:
      - build/
      - .gradle/
Dockerfile
FROM openjdk:8-jdk-alpine
CMD ["gradle"]
ENV GRADLE_HOME /opt/gradle
ENV GRADLE_VERSION 4.4.1
ARG GRADLE_DOWNLOAD_SHA256=e7cf7d1853dfc30c1c44f571d3919eeeedef002823b66b6a988d27e919686389
RUN set -o errexit -o nounset \
&& echo "Installing build dependencies" \
&& apk add --no-cache --virtual .build-deps \
ca-certificates \
openssl \
unzip \
\
&& echo "Downloading Gradle" \
&& wget -O gradle.zip "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip" \
\
&& echo "Checking download hash" \
&& echo "${GRADLE_DOWNLOAD_SHA256} *gradle.zip" | sha256sum -c - \
\
&& echo "Installing Gradle" \
&& unzip gradle.zip \
&& rm gradle.zip \
&& mkdir /opt \
&& mv "gradle-${GRADLE_VERSION}" "${GRADLE_HOME}/" \
&& ln -s "${GRADLE_HOME}/bin/gradle" /usr/bin/gradle \
\
&& apk del .build-deps \
\
&& echo "Adding gradle user and group" \
&& addgroup -S -g 1000 gradle \
&& adduser -D -S -G gradle -u 1000 -s /bin/ash gradle \
&& mkdir /home/gradle/.gradle \
&& chown -R gradle:gradle /home/gradle \
\
&& echo "Symlinking root Gradle cache to gradle Gradle cache" \
&& ln -s /home/gradle/.gradle /root/.gradle
# Create Gradle volume
USER gradle
VOLUME "/home/gradle/.gradle"
WORKDIR /home/gradle
RUN set -o errexit -o nounset \
&& echo "Testing Gradle installation" \
&& gradle --version
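To answer where the user is defined: the `USER gradle` instruction near the bottom of the Dockerfile is what makes everything after it (and the resulting container) run as the non-root gradle user. A minimal sketch of the change (assuming you rebuild and push the modified image yourself):

```dockerfile
# ... everything above unchanged ...

# Create Gradle volume
# USER gradle        <-- remove or comment out this line so the image stays root
VOLUME "/home/gradle/.gradle"
WORKDIR /home/gradle
```

Alternatively, leave the line in place and add `USER root` as the last instruction to switch back to root for the final image.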
EDIT:
Okay how to use gradle in docker after it is downloaded as image and available in gitlab.
build_java:
  image: docker:dind
  stage: build_java
  script:
    - docker images
    - docker login -u _json_key -p "$(echo $GCR_SERVICE_ACCOUNT | base64 -d)" https://eu.gcr.io
    - docker pull eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root
    - docker images
    - ??WHAT COMMAND TO CALL GRADLE BUILD??
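For the open question at the end of that script, one common pattern (a sketch, assuming the project is checked out in the CI working directory and the container paths below match your setup) is to mount the workspace into the pulled image and invoke gradle there:

```yaml
script:
  - docker pull eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root
  - docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project
      eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root gradle build
```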

Pipe RUN's output to ENV in Dockerfile

I have the following command in my Dockerfile:
RUN echo "\
export NODE_VERSION=$(\
curl -sL https://nodejs.org/dist/latest/ |\
tac |\
tac |\
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' |\
head -1\
)" >> /etc/bash.bashrc
RUN source /etc/bash.bashrc
The command above should store export NODE_VERSION=6.2.2 in /etc/bash.bashrc, but it's not storing anything.
This works however when I'm inside an image with bash and manually entering the following commands.
Update:
I changed the shell back from bash to dash, the Debian/Ubuntu default, which is POSIX compliant. I removed this line:
RUN ln -sf /bin/bash /bin/sh && ln -sf /bin/bash /bin/sh.distrib
Than I tried to add to the environment variables with export:
RUN export NODE_VERSION=$(\
curl -sL https://nodejs.org/dist/latest/ |\
tac |\
tac |\
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' |\
head -1\
)
But again, the output is missing at image creation, yet it works when I run the image with $ docker run --rm -it debian /bin/sh. Why?
Update 2:
Looks like the final solution should be something like this:
RUN NODE_VERSION=$( \
curl -sL https://nodejs.org/dist/latest/ | \
tac | \
tac | \
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' | \
head -1 \
) && echo $NODE_VERSION
ENV NODE_VERSION $NODE_VERSION
echo $NODE_VERSION prints 6.2.2 as it should during the Dockerfile build as well, but ENV NODE_VERSION $NODE_VERSION cannot read it. Is there a way to define variables globally, or how can I pass RUN's output to ENV?
Solution:
I ended up putting the node.js installation part under the same RUN command:
RUN NODE_VERSION=$( \
curl -sL https://nodejs.org/dist/latest/ | \
tac | \
tac | \
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' | \
head -1 \
) \
&& echo $NODE_VERSION \
&& curl -SLO "https://nodejs.org/dist/latest/node-v$NODE_VERSION-linux-x64.tar.xz" -o "node-v$NODE_VERSION-linux-x64.tar.xz" \
&& curl -SLO "https://nodejs.org/dist/latest/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
&& rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt
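As a side note, an alternative to the single-RUN approach (a sketch, not from the original post) is to compute the version outside the build and hand it in as a build argument, which ENV can read:

```dockerfile
# Dockerfile
ARG NODE_VERSION
ENV NODE_VERSION=${NODE_VERSION}

# invoked with something like:
# docker build --build-arg NODE_VERSION=$( \
#     curl -sL https://nodejs.org/dist/latest/ \
#     | grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' \
#     | head -1) .
```

This moves the "dynamic" step to build time on the host, so the value is baked into the image as a real environment variable.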
Update:
But again, the output is missing at image creation, yet it works when I
run the image with $ docker run --rm -it debian /bin/sh. Why?
This is because each statement (conventionally started with an uppercase verb like RUN, ADD, COPY, ENV, etc.) runs in a brand-new intermediate container.
These intermediate containers do not share an environment (e.g. environment variables); they share only a union file system. That is, only data saved to the file system and variables defined in the Dockerfile itself (e.g. through ENV) carry over between intermediate containers. Check out this post and the UnionFS wiki if you want to know how the union file system works.
If your goal is to install the latest Node each time you build the image, how about trying nvm (Node Version Manager)?
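A small illustrative Dockerfile (not from the original answer) that makes the distinction concrete:

```dockerfile
FROM alpine
RUN export FOO=bar        # FOO exists only in this layer's shell
RUN echo "FOO is '$FOO'"  # prints: FOO is '' -- the export did not survive
RUN echo bar > /tmp/foo   # files DO persist via the union file system
RUN cat /tmp/foo          # prints: bar
```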
ARG UBUNTU=16.04
# Pull base image.
FROM ubuntu:${UBUNTU}
# arguments
ARG NVM=0.33.9
ARG NODE=node
# update apt
RUN apt-get update
# Install curl
RUN apt-get install -y curl
# Set home for NVM
ENV NVM_DIR=/home/inazuma/.nvm
# Install Node.js with NVM
RUN mkdir -p ${NVM_DIR} && \
curl -o- https://raw.githubusercontent.com/creationix/nvm/v${NVM}/install.sh | bash && \
. ${NVM_DIR}/nvm.sh && \
nvm install ${NODE}
# The first line below should be repeated in each intermediate container
# that needs the nvm, node and npm commands
RUN . ${NVM_DIR}/nvm.sh && nvm use ${NODE} && \
    npm install -g cowsay && \
    cowsay "Making Docker images is really a headache!"
# Set up your PATH for nvm, node and npm command
CMD ". ${NVM_DIR}/nvm.sh && nvm use ${NODE} && bash"
Note that nvm settings do not persist across intermediate containers, so you should run . ${NVM_DIR}/nvm.sh to set up the nvm command in each new intermediate container.
nvm manages node binaries locally; use nvm use ${NODE} to put node and npm on the PATH. In nvm, node is an alias for the latest version of Node; therefore we set the NODE argument to node (it can also be a semantic version string like 5.0 or 9.11.1).

Resources