When I build my Docker image on my MacBook M1, I start receiving errors specifically regarding syslinux, and if I comment that out I continue to receive errors such as this:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.13/main: UNTRUSTED signature
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.13/main: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.13/community: UNTRUSTED signature
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.13/community: No such file or directory
So I know the issue revolves around the repositories I use, which is where the ENTRYPOINT in my Dockerfile comes in:
ENTRYPOINT /src/aports/scripts/mkimage.sh \
--tag v3.13 \
--outdir /build \
--arch x86_64 \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.13/main \
--extra-repository http://dl-cdn.alpinelinux.org/alpine/v3.13/community \
--profile iot
I would expect this to work on my M1, but it doesn't! I used another MacBook and that one builds it, so why not the M1? I would greatly appreciate any help with this.
EDIT 2: Adding full Dockerfile:
# This image contains the build environment for edge appliance install ISOs
FROM alpine:3.13
# Define metadata
LABEL maintainer="this_dude#dude.net"
# Configure user
RUN addgroup root this_build
# Initialize update and upgrade on Alpine AMI
RUN apk -U upgrade
# Install dependencies
RUN apk add --no-cache \
alpine-conf \
alpine-sdk \
apk-tools \
dosfstools \
grub-efi \
mtools \
squashfs-tools \
syslinux \
xorriso
WORKDIR /src
# Clone alpine ports repository containing the iso builder
RUN git clone --depth=1 --branch v3.13.2 git://git.alpinelinux.org/aports
RUN chmod +x aports/scripts/mkimage.sh
# Include edge appliance image profile
RUN ln -sf /build/mkimg.run.sh /src/aports/scripts/mkimg.run.sh
WORKDIR /build
# Run ISO build
ENTRYPOINT /src/aports/scripts/mkimage.sh \
--tag v3.13 \
--outdir /build \
--arch x86_64 \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.13/main \
--extra-repository http://dl-cdn.alpinelinux.org/alpine/v3.13/community \
--profile iot
As you can see at https://pkgs.alpinelinux.org/packages?name=syslinux, the syslinux bootloader package has no support for aarch64 (M1 processors). I would suggest using another bootloader with both x86_64 and ARM support, for example https://pkgs.alpinelinux.org/packages?name=u-boot&branch=edge.
And don't forget to change the --arch x86_64 argument in your entrypoint to --arch aarch64 if you want to run it without errors on your M1, or just remove it to fall back to the script's default_arch.
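As an illustration only (not tested on an M1), the package list and entrypoint could be adapted roughly like this; the arch check keeps syslinux for x86_64 builders, and dropping --arch lets mkimage.sh fall back to the builder's own architecture:
# Install syslinux only where it exists (x86_64); aarch64 builders skip it.
RUN apk add --no-cache \
    alpine-conf \
    alpine-sdk \
    apk-tools \
    dosfstools \
    grub-efi \
    mtools \
    squashfs-tools \
    xorriso && \
    if [ "$(apk --print-arch)" = "x86_64" ]; then apk add --no-cache syslinux; fi
# Omit --arch so mkimage.sh uses its default_arch for the current machine.
ENTRYPOINT /src/aports/scripts/mkimage.sh \
    --tag v3.13 \
    --outdir /build \
    --repository http://dl-cdn.alpinelinux.org/alpine/v3.13/main \
    --extra-repository http://dl-cdn.alpinelinux.org/alpine/v3.13/community \
    --profile iot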
I am having a very weird issue when building an armv7 Docker image using docker buildx; it does not occur when building natively on armv7 hardware.
Here is a very simple docker image:
FROM ubuntu:20.04
ARG ARCH
RUN apt-get update && \
apt-get install -y curl wget
# Install Go
ENV GOLANG_VERSION 1.15.8
RUN set -eux; \
\
url="https://golang.org/dl/go${GOLANG_VERSION}.linux-${ARCH}.tar.gz"; \
wget -O go.tgz "$url"; \
tar -C /usr/local -xzf go.tgz; \
rm go.tgz; \
export PATH="/usr/local/go/bin:$PATH"; \
go version
I can build the image for arm64 both on macOS and on a Raspberry Pi just fine. No such luck when building it for armv7, though.
I am building the image on macOS using buildx as follows:
docker buildx build --platform linux/arm/v7 -t test:armv7 --build-arg ARCH=armv6l .
This fails with a certificate error when connecting to golang.org:
#7 0.378 Resolving golang.org (golang.org)... 142.250.185.113, 2a00:1450:4001:80f::2011
#7 0.448 Connecting to golang.org (golang.org)|142.250.185.113|:443... connected.
#7 0.682 ERROR: cannot verify golang.org's certificate, issued by 'CN=GTS CA 1O1,O=Google Trust Services,C=US':
#7 0.682 Unable to locally verify the issuer's authority.
#7 0.688 To connect to golang.org insecurely, use `--no-check-certificate'.
However, if I build the exact same image natively on armv7 (Raspberry Pi 2B), it works just fine:
docker build -t test:armv7 --build-arg ARCH=armv6l .
Needless to say I am very confused why one works and the other one doesn't.
Add the --no-check-certificate argument to the wget command so that it does not check the SSL certificate. For you:
FROM ubuntu:20.04
ARG ARCH
RUN apt-get update && \
apt-get install -y curl wget
# Install Go
ENV GOLANG_VERSION 1.15.8
RUN set -eux; \
\
url="https://golang.org/dl/go${GOLANG_VERSION}.linux-${ARCH}.tar.gz"; \
wget --no-check-certificate -O go.tgz "$url"; \
tar -C /usr/local -xzf go.tgz; \
rm go.tgz; \
export PATH="/usr/local/go/bin:$PATH"; \
go version
The better way would be to install the ca-certificates package: change the install line to apt-get install -y curl wget ca-certificates.
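For illustration, the Dockerfile above with that change applied (only the apt-get line changes, and the insecure wget flag is no longer needed, since the CA bundle lets wget verify golang.org's certificate):
FROM ubuntu:20.04
ARG ARCH
# ca-certificates provides the CA bundle wget needs to verify TLS certificates.
RUN apt-get update && \
    apt-get install -y curl wget ca-certificates
# Install Go
ENV GOLANG_VERSION 1.15.8
RUN set -eux; \
    url="https://golang.org/dl/go${GOLANG_VERSION}.linux-${ARCH}.tar.gz"; \
    wget -O go.tgz "$url"; \
    tar -C /usr/local -xzf go.tgz; \
    rm go.tgz; \
    export PATH="/usr/local/go/bin:$PATH"; \
    go version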
When I use curl --head to test my website, it returns the server information.
I followed this tutorial to hide the nginx server header.
But when I run the command yum install nginx-module-security-headers, it returns yum: not found.
I also tried apk add nginx-module-security-headers, and it shows that the package is missing.
I have used nginx:1.17.6-alpine as my base Docker image. Does anyone know how to hide the server name from the header with this Alpine image?
I think I have an easier solution here: https://gist.github.com/hermanbanken/96f0ff298c162a522ddbba44cad31081. Big thanks to hermanbanken on GitHub for sharing this gist.
The idea is to create a multi-stage build that uses the nginx alpine image as a base for compiling the module. This turns into the following Dockerfile:
ARG VERSION=alpine
FROM nginx:${VERSION} as builder
ENV MORE_HEADERS_VERSION=0.33
ENV MORE_HEADERS_GITREPO=openresty/headers-more-nginx-module
# Download sources
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz && \
wget "https://github.com/${MORE_HEADERS_GITREPO}/archive/v${MORE_HEADERS_VERSION}.tar.gz" -O extra_module.tar.gz
# For latest build deps, see https://github.com/nginxinc/docker-nginx/blob/master/mainline/alpine/Dockerfile
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
libxslt-dev \
gd-dev \
geoip-dev \
perl-dev \
libedit-dev \
mercurial \
bash \
alpine-sdk \
findutils
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN rm -rf /usr/src/nginx /usr/src/extra_module && mkdir -p /usr/src/nginx /usr/src/extra_module && \
tar -zxC /usr/src/nginx -f nginx.tar.gz && \
tar -xzC /usr/src/extra_module -f extra_module.tar.gz
WORKDIR /usr/src/nginx/nginx-${NGINX_VERSION}
# Reuse same cli arguments as the nginx:alpine image used to build
RUN CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') && \
sh -c "./configure --with-compat $CONFARGS --add-dynamic-module=/usr/src/extra_module/*" && make modules
# Production container starts here
FROM nginx:${VERSION}
COPY --from=builder /usr/src/nginx/nginx-${NGINX_VERSION}/objs/*_module.so /etc/nginx/modules/
.... skipped inserting config files and stuff ...
# Validate the config
RUN nginx -t
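The skipped config step would then load the freshly built module and strip the header. A sketch of the nginx.conf additions, assuming the .so was copied into /etc/nginx/modules/ as above:
# Top of nginx.conf: load the dynamically built headers-more module.
load_module modules/ngx_http_headers_more_filter_module.so;
http {
    # Hide the version, then drop the Server header entirely.
    server_tokens off;
    more_clear_headers Server;
    include /etc/nginx/conf.d/*.conf;
}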
The Alpine repo probably doesn't have the ngx_security_headers module, but the mentioned tutorial also provides the option of using the Headers More module. You should be able to install this module in your Alpine distro using the command:
apk add nginx-mod-http-headers-more
Hope it helps.
I found an alternative solution. The reason it reports the binary as not compatible is that I have an nginx pre-installed under the target route, and it is not compatible with the headers-more module I am using. That means I cannot simply install the third-party library from the Alpine package.
So I prepared a clean Alpine OS and followed the GitHub repository to build nginx from source with the additional feature. The build output lands under the prefix path you specified.
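For anyone following the same route, the from-source build boils down to something like this (a sketch, run from the extracted nginx source directory, assuming the sources and build dependencies from the Dockerfile above are already in place):
# Configure nginx with the extra module compiled in and a custom prefix, then build and install.
./configure --prefix=/opt/nginx --add-module=/usr/src/extra_module/headers-more-nginx-module-0.33
make && make install
# The finished server, with headers-more built in, lands under the --prefix path (/opt/nginx here).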
I'm writing a Dockerfile to create an image for a web server (a Shiny server, more precisely). It works well, but it depends on a huge database folder (db/) that is not distributed with the package, so I want to do all this preprocessing while creating the image, by running the corresponding script from the Dockerfile.
I expected this to be simple, but I'm struggling to figure out where my files are located within the image.
This repo has the following structure:
Dockerfile
preprocessing_files
configuration_files
app/
application_files
db/
processed_files
app/db/ does not exist initially, but is created and filled with files when the preprocessing_files are run.
The Dockerfile is the following:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxml2-dev \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
RUN R -e "install.packages(c('shiny', 'flexdashboard','rmarkdown','tidyverse','plotly','DT','drc','gridExtra','fitdistrplus'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
COPY /app/db /srv/shiny-server/app/
# Make the ShinyApp available at port 80
EXPOSE 80
CMD ["/usr/bin/shiny-server"]
The file above works well if the preprocessing_files are run in advance, so that app/application_files can successfully read app/db/processed_files. How could this script be run from the Dockerfile? To me the intuitive solution would be simply to write:
RUN bash -c "preprocessing.sh"
before the COPY instructions, but then preprocessing_files are not found. If the above instruction is written below the COPY lines and also below WORKDIR app/, the same error happens. I cannot understand why.
You cannot execute code on the host machine from a Dockerfile. The RUN command executes inside the container being built. You can either:
Copy preprocessing_files into the Docker image and run preprocessing.sh inside the container (this would increase the size of the image); see the sketch below, or
Create a makefile/build.sh script which launches preprocessing.sh before executing docker build
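A rough sketch of the first option, with hypothetical paths: it assumes preprocessing.sh sits at the repository root next to preprocessing_files/ and writes its output into app/db/. Note that a COPY always copies from the host build context, never from files created by an earlier RUN, which is why the populated app/ has to be moved into place by the RUN step itself:
# Copy the whole build context (preprocessing_files, configuration_files, app/) into the image.
WORKDIR /tmp/build
COPY . .
# Run the preprocessing inside the image; assumed to read preprocessing_files/
# and fill app/db/ with processed_files.
RUN bash preprocessing.sh
# Move the now-complete app (including db/) into the Shiny server directory,
# replacing the COPY /app /srv/shiny-server/ line from the original Dockerfile.
RUN cp -r app/. /srv/shiny-server/ && rm -rf /tmp/build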
When I try to build an image for my application, an image that relies upon BuildKit, I receive an error: failed to dial gRPC: unable to upgrade to h2c, received 403
I can build standard Docker images, but if a build relies on BuildKit, I get errors.
Specifically, the command that fails is:
docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker
My bitbucket-pipelines.yml is as follows; the first two docker build commands work and the images are generated, but the third, which relies on BuildKit, does not.
image: docker:stable
pipelines:
  default:
    - step:
        name: build
        size: 2x
        script:
          - docker build -t alpine-base $BITBUCKET_CLONE_DIR/supporting/alpine-base
          - docker build -t composer-xv:latest $BITBUCKET_CLONE_DIR/supporting/composer-xv
          - apk add openssh-client
          - eval `ssh-agent`
          - export DOCKER_BUILDKIT=1
          - docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker
          - docker images
        services:
          - docker
        caches:
          - docker
My Dockerfile is as follows:
# syntax=docker/dockerfile:1.0.0-experimental
FROM composer:1.7 as phpdep
COPY application/database/ database/
COPY application/composer.json composer.json
COPY application/composer.lock composer.lock
# Install PHP dependencies in 'vendor'
RUN --mount=type=ssh composer install \
--ignore-platform-reqs \
--no-dev \
--no-interaction \
--no-plugins \
--no-scripts \
--prefer-dist
#
# Final image build stage
#
FROM alpine-base:latest as final
ADD application /app/application
COPY --from=phpdep /app/vendor/ /app/application/vendor/
ADD entrypoint.sh /entrypoint.sh
RUN \
apk update && \
apk upgrade && \
apk add \
php7 php7-mysqli php7-mcrypt php7-gd \
php7-curl php7-xml php7-bcmath php7-mbstring \
php7-zip php7-bz2 ca-certificates php7-openssl php7-zlib \
php7-bcmath php7-dom php7-json php7-phar php7-pdo_mysql php7-ctype \
php7-session php7-fileinfo php7-xmlwriter php7-tokenizer php7-soap \
php7-simplexml && \
cd /app/application && \
cp .env.example .env && \
chown nobody:nobody /app/application/.env && \
sed -i 's/;openssl.capath=/openssl.capath=\/etc\/ssl\/certs/' /etc/php7/php.ini && \
sed -i 's/memory_limit = 128M/memory_limit = 1024M/' /etc/php7/php.ini && \
apk del --purge curl wget && \
mkdir -p /var/log/workers && \
mkdir -p /run/php && \
echo "export PS1='WORKER \h:\w\$ '" >> /etc/profile
COPY files/logrotate.d/ /etc/logrotate.d/
CMD ["/entrypoint.sh"]
Bitbucket Pipelines don't seem to support DOCKER_BUILDKIT; see https://jira.atlassian.com/browse/BCLOUD-17590?focusedCommentId=3019597&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-3019597. They say they are waiting for https://github.com/moby/buildkit/pull/2723 to be fixed...
You could try again as, since July 2022, you have:
Announcing support for Docker BuildKit in Bitbucket Pipelines
(Jayant Gawali, Atlassian Team)
We are happy to announce that one of the top voted features for Bitbucket Pipelines, Docker BuildKit is now available. You can now build Docker images with the BuildKit utility.
With BuildKit you can take advantage of the various features it provides like:
Performance: BuildKit uses parallelism and caching internally to build images faster.
Secrets: Mount secrets and build images safely.
Cache: Mount caches to save re-downloading all external dependencies every time.
SSH: Mount SSH Keys to build images.
Configuring your bitbucket-pipelines.yaml
BuildKit is now available with the Docker Daemon service.
It is not enabled by default and can be enabled by setting the environment variable DOCKER_BUILDKIT=1 in the pipelines configuration.
pipelines:
  default:
    - step:
        script:
          - export DOCKER_BUILDKIT=1
          - docker build --secret id=mysecret,src=mysecret.txt .
        services:
          - docker
To learn more about how to set it up, please refer to the support documentation; for information on Docker BuildKit, visit: Docker Docs → Build images with BuildKit.
Please note:
Use multi-stage builds to utilise parallelism.
Caching is not shared across different builds and is limited to the Docker node where the build runs.
With BuildKit, secrets can be mounted securely as shown above.
For restrictions and limitations please refer to the restrictions section of our support documentation.
We are trying to host a TensorFlow object-detection model on GCP.
We maintain the directory structure below before running "gcloud app deploy".
For your convenience I am attaching the configuration files to the question.
We are getting the deployment error mentioned below. Please suggest a solution.
+root
+object_detection/
+slim/
+env
+app.yaml
+Dockerfile
+requirement.txt
+index.html
+test.py
Dockerfile
FROM gcr.io/google-appengine/python
LABEL python_version=python2.7
RUN virtualenv --no-download /env -p python2.7
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Various Python and C/build deps
RUN apt-get update && apt-get install -y \
wget \
build-essential \
cmake \
git \
unzip \
pkg-config \
python-dev \
python-opencv \
libopencv-dev \
libav-tools \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libgtk2.0-dev \
python-numpy \
python-pycurl \
libatlas-base-dev \
gfortran \
webp \
python-opencv \
qt5-default \
libvtk6-dev \
zlib1g-dev \
protobuf-compiler \
python-pil python-lxml \
python-tk
# Install Open CV - Warning, this takes absolutely forever
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
RUN export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/app/slim
CMD exec gunicorn -b :$PORT UploadTest:app
requirement.txt
Flask==0.12.2
gunicorn==19.7.1
numpy==1.13.1
requests==0.11.1
bs4==0.0.1
nltk==3.2.1
pymysql==0.7.2
xlsxwriter==0.8.5
Pillow==4.2.1
pytesseract==0.1
opencv-python>=3.0
matplotlib==2.0.2
tensorflow==1.3.0
lxml==4.0.0
app.yaml
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT UploadTest:app
threadsafe: true
runtime_config:
  python_version: 2
After all this, I set up the Google Cloud environment with gcloud init
and then run gcloud app deploy.
I am getting the below error while deploying the solution.
Error:
Step 10/12 : RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
---> Running in 9b3ec9c43c2d
/app/object_detection/protos/anchor_generator.proto: File does not reside within any path specified using --proto_path (or -I). You must specify a --proto_path which encompasses this file. Note that the proto_path must be an exact prefix of the .proto file names -- protoc is too dumb to figure out when two paths (e.g. absolute and relative) are equivalent (it's harder than you think).
The command '/bin/sh -c protoc /app/object_detection/protos/*.proto --python_out=/app/.' returned a non-zero code: 1
ERROR
ERROR: build step "gcr.io/cloud-builders/docker#sha256:a4a83be9b2fb61452e864ecf1bcfca99d1845499ef9500ae2905cea0ea593769" failed: exit status 1
----------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.app.deploy) Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/4dba3217-b7d6-4341-b28e-09a9dad45c18?
There is a directory "object_detection/protos" present and all the necessary files are there. I am still getting the deployment error. Please suggest what to change in the Dockerfile to deploy it successfully.
My assumption: GCP is not able to figure out the path of the .proto files. Maybe I have to alter something in the Dockerfile, but I am not able to figure out the solution. Please answer.
NB: This setup runs well on my local machine, but not on GCP.
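For what it's worth, the protoc error above usually just means the .proto files were referenced with a path that is not under any --proto_path. A hedged adjustment of that Dockerfile step, assuming the protos live under /app/object_detection/protos/ as the log suggests:
# Run protoc from /app so the .proto paths are relative to the declared proto_path.
WORKDIR /app
RUN protoc --proto_path=. object_detection/protos/*.proto --python_out=.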