Trying to automate arm64 build on Docker Hub - docker

From what I understand, it is possible to build an arm64v8 image on the Docker Hub infrastructure (which uses amd64). According to this thread, it can be done using QEMU.
So I added a pre_build hook:
#!/bin/bash
docker run --rm --privileged multiarch/qemu-user-static:register --reset
The QEMU binary is also downloaded inside the container:
FROM alpine AS builder
RUN apk update
RUN apk add curl
WORKDIR /qemu
# downloaded here...
RUN curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-arm.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-arm/qemu-arm-static .
FROM area51/gdal:arm64v8-2.2.3
# ...then added here
COPY --from=builder /qemu/qemu-arm-static /usr/bin
RUN apt-get update
RUN apt-get install -y libgdal-dev python3-pip libspatialindex-dev unar bc
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
ADD ./requirements.txt .
RUN pip3 install -r requirements.txt
RUN mkdir /code
ADD . /code/
WORKDIR /code
CMD python3.5 server.py
EXPOSE 8080
Unfortunately, it doesn't work:
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
Switched to a new branch 'auto-build'
Executing pre_build hook...
Unable to find image 'multiarch/qemu-user-static:register' locally
register: Pulling from multiarch/qemu-user-static
bdbbaa22dec6: Pulling fs layer
42399a41a764: Pulling fs layer
ed8a5179ae11: Pulling fs layer
1ec39da9c97d: Pulling fs layer
1ec39da9c97d: Waiting
42399a41a764: Verifying Checksum
42399a41a764: Download complete
bdbbaa22dec6: Verifying Checksum
bdbbaa22dec6: Download complete
ed8a5179ae11: Verifying Checksum
ed8a5179ae11: Download complete
1ec39da9c97d: Verifying Checksum
1ec39da9c97d: Download complete
bdbbaa22dec6: Pull complete
42399a41a764: Pull complete
ed8a5179ae11: Pull complete
1ec39da9c97d: Pull complete
Digest: sha256:7502ce31890ab5da0ab6e5e5edc1e2563caa45da1c5d76aaf7dc4252aea926dc
Status: Downloaded newer image for multiarch/qemu-user-static:register
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc-static as binfmt interpreter for sparc
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-sparc64-static as binfmt interpreter for sparc64
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be
Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa
Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32
Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64
Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa
Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb
Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze
Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel
Setting /usr/bin/qemu-or1k-static as binfmt interpreter for or1k
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/cl00e9ment/open-elevation:latest...
Step 1/18 : FROM alpine AS builder
---> e7d92cdc71fe
Step 2/18 : RUN apk update
---> Running in a62df65e92ac
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
v3.11.3-6-gb1cd1b7acf [http://dl-cdn.alpinelinux.org/alpine/v3.11/main]
v3.11.3-5-gb26b362c4a [http://dl-cdn.alpinelinux.org/alpine/v3.11/community]
OK: 11259 distinct packages available
Removing intermediate container a62df65e92ac
---> 9decee1216df
Step 3/18 : RUN apk add curl
---> Running in 440f41edd63d
(1/4) Installing ca-certificates (20191127-r0)
(2/4) Installing nghttp2-libs (1.40.0-r0)
(3/4) Installing libcurl (7.67.0-r0)
(4/4) Installing curl (7.67.0-r0)
Executing busybox-1.31.1-r9.trigger
Executing ca-certificates-20191127-r0.trigger
OK: 7 MiB in 18 packages
Removing intermediate container 440f41edd63d
---> 54c70441e6d3
Step 4/18 : WORKDIR /qemu
Removing intermediate container 58b03a58671b
---> 89c6e32b5854
Step 5/18 : RUN curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-arm.tar.gz | tar zxvf - -C . && mv qemu-3.0.0+resin-arm/qemu-arm-static .
---> Running in 11696855e374
 % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 619 0 619 0 0 3327 0 --:--:-- --:--:-- --:--:-- 3917
qemu-3.0.0+resin-arm/
19 1678k 19 321k 0 0 846k 0 0:00:01 --:--:-- 0:00:01 846k
100 1678k 100 1678k 0 0 2535k 0 --:--:-- --:--:-- --:--:-- 4828k
qemu-3.0.0+resin-arm/qemu-arm-static
Removing intermediate container 11696855e374
---> 80668e34eb37
Step 6/18 : FROM area51/gdal:arm64v8-2.2.3
---> 4edbfeef8f1a
Step 7/18 : COPY --from=builder /qemu/qemu-arm-static /usr/bin
---> 91c196da9280
Step 8/18 : RUN apt-get update
---> Running in 37c97a8903f3
standard_init_linux.go:190: exec user process caused "no such file or directory"
Removing intermediate container 37c97a8903f3
The command '/bin/sh -c apt-get update' returned a non-zero code: 1
The error:
standard_init_linux.go:190: exec user process caused "no such file or directory"
It looks like this one:
standard_init_linux.go:190: exec user process caused "exec format error"
I'm getting used to seeing that one; it means there is an architecture problem. Does the first error mean the same thing?
If this is again an architecture problem, what am I missing?

I was able to fix the "no such file or directory" error using the solution from this answer:
https://stackoverflow.com/a/56063679/1194731
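For reference, one mismatch is visible in the question itself: the base image is arm64v8 (aarch64), but the Dockerfile copies qemu-arm-static, the 32-bit arm interpreter; the pre_build hook's log registers /usr/bin/qemu-aarch64-static for aarch64, which then doesn't exist in the image. A sketch of that kind of fix (the aarch64 asset name below is an assumption — check the balena-io/qemu release page):

```dockerfile
FROM alpine AS builder
RUN apk add --no-cache curl
WORKDIR /qemu
# Fetch the aarch64 interpreter instead of the 32-bit arm one
# (asset name is an assumption, verify it on the release page):
RUN curl -L https://github.com/balena-io/qemu/releases/download/v3.0.0%2Bresin/qemu-3.0.0+resin-aarch64.tar.gz \
    | tar zxvf - -C . \
 && mv qemu-3.0.0+resin-aarch64/qemu-aarch64-static .

FROM area51/gdal:arm64v8-2.2.3
# aarch64 binaries are run through /usr/bin/qemu-aarch64-static — the path
# binfmt registered in the pre_build hook's output above
COPY --from=builder /qemu/qemu-aarch64-static /usr/bin
```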

Related

Dockerfile (windows): Chocolatey giving "not recognized as the name of a cmdlet" error after nvm install

I'm trying to install NVM in my container by first installing Chocolatey. The issue I'm running into: after installing nvm through Chocolatey, when I run the nvm command during the build to test whether it has been installed, I get an nvm : The term 'nvm' is not recognized as the name of a cmdlet error.
My Dockerfile is as follows:
# escape=`
#Use the latest Windows Server Core 2019 image.
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
#Adding Chocolatey (a windows package manager)
RUN powershell Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
RUN powershell Get-ChildItem Env:
#Using Chocolatey to install nvm (Node Version Manager)
RUN choco install -y nvm
RUN powershell Import-Module C:\ProgramData\chocolatey\helpers\chocolateyProfile.psm1
RUN powershell refreshenv
RUN powershell nvm
ENTRYPOINT powershell
Here is the output of my docker build -t dockeragent:latest --no-cache . command:
PS C:\docker\dockeragent> docker build -t dockeragent:latest --no-cache .
Sending build context to Docker daemon 751.1MB
Step 1/9 : FROM mcr.microsoft.com/windows/servercore:ltsc2019
---> e795f3f8aa80
Step 2/9 : SHELL ["cmd", "/S", "/C"]
---> Running in d7dc5ed6ce89
Removing intermediate container d7dc5ed6ce89
---> c7dc93b631eb
Step 3/9 : RUN powershell Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
---> Running in 369cfe083374
Forcing web requests to allow TLS v1.2 (Required for requests to Chocolatey.org)
Getting latest version of the Chocolatey package for download.
Not using proxy.
Getting Chocolatey from https://community.chocolatey.org/api/v2/package/chocolatey/1.2.0.
Downloading https://community.chocolatey.org/api/v2/package/chocolatey/1.2.0 to C:\Users\ContainerAdministrator\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip
Not using proxy.
Extracting C:\Users\ContainerAdministrator\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip to C:\Users\ContainerAdministrator\AppData\Local\Temp\chocolatey\chocoInstall
Installing Chocolatey on the local machine
Creating ChocolateyInstall as an environment variable (targeting 'Machine')
Setting ChocolateyInstall to 'C:\ProgramData\chocolatey'
WARNING: It's very likely you will need to close and reopen your shell
before you can use choco.
Restricting write permissions to Administrators
We are setting up the Chocolatey package repository.
The packages themselves go to 'C:\ProgramData\chocolatey\lib'
(i.e. C:\ProgramData\chocolatey\lib\yourPackageName).
A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin'
and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'.
Creating Chocolatey folders if they do not already exist.
WARNING: You can safely ignore errors related to missing log files when
upgrading from a version of Chocolatey less than 0.9.9.
'Batch file could not be found' is also safe to ignore.
'The system cannot find the file specified' - also safe.
chocolatey.nupkg file not installed in lib.
Attempting to locate it from bootstrapper.
PATH environment variable does not have C:\ProgramData\chocolatey\bin in it. Adding...
WARNING: Not setting tab completion: Profile file does not exist at
'C:\Users\ContainerAdministrator\Documents\WindowsPowerShell\Microsoft.PowerShe
ll_profile.ps1'.
Chocolatey (choco.exe) is now ready.
You can call choco from anywhere, command line or powershell by typing choco.
Run choco /? for a list of functions.
You may need to shut down and restart powershell and/or consoles
first prior to using choco.
Ensuring Chocolatey commands are on the path
Ensuring chocolatey.nupkg is in the lib folder
Removing intermediate container 369cfe083374
---> 6d782b624394
Step 4/9 : RUN powershell Get-ChildItem Env:
---> Running in 66fa6d3c8dc1
Name Value
---- -----
ALLUSERSPROFILE C:\ProgramData
APPDATA C:\Users\ContainerAdministrator\AppData\Roaming
ChocolateyInstall C:\ProgramData\chocolatey
ChocolateyLastPathUpdate 133118979019612509
CommonProgramFiles C:\Program Files\Common Files
CommonProgramFiles(x86) C:\Program Files (x86)\Common Files
CommonProgramW6432 C:\Program Files\Common Files
COMPUTERNAME 66FA6D3C8DC1
ComSpec C:\Windows\system32\cmd.exe
DriverData C:\Windows\System32\Drivers\DriverData
LOCALAPPDATA C:\Users\ContainerAdministrator\AppData\Local
NUMBER_OF_PROCESSORS 4
OS Windows_NT
Path C:\Windows\system32;C:\Windows;C:\Windows\Sys...
PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;....
PROCESSOR_ARCHITECTURE AMD64
PROCESSOR_IDENTIFIER Intel64 Family 6 Model 62 Stepping 4, Genuine...
PROCESSOR_LEVEL 6
PROCESSOR_REVISION 3e04
ProgramData C:\ProgramData
ProgramFiles C:\Program Files
ProgramFiles(x86) C:\Program Files (x86)
ProgramW6432 C:\Program Files
PROMPT $P$G
PSModulePath C:\Users\ContainerAdministrator\Documents\Win...
PUBLIC C:\Users\Public
SystemDrive C:
SystemRoot C:\Windows
TEMP C:\Users\ContainerAdministrator\AppData\Local...
TMP C:\Users\ContainerAdministrator\AppData\Local...
USERDOMAIN User Manager
USERNAME ContainerAdministrator
USERPROFILE C:\Users\ContainerAdministrator
windir C:\Windows
Removing intermediate container 66fa6d3c8dc1
---> b4ecb9d7464b
Step 5/9 : RUN choco install -y nvm
---> Running in 19f770d7871d
Chocolatey v1.2.0
Installing the following packages:
nvm
By installing, you accept licenses for the packages.
Progress: Downloading nvm.install 1.1.9... 100%
Progress: Downloading nvm 1.1.9... 100%
nvm.install v1.1.9 [Approved]
nvm.install package files install completed. Performing other installation steps.
Downloading nvm.install
from 'https://github.com/coreybutler/nvm-windows/releases/download/1.1.9/nvm-setup.zip'
Progress: 100% - Completed download of C:\Users\ContainerAdministrator\AppData\Local\Temp\chocolatey\nvm.install\1.1.9\nvm-setup.zip (4.14 MB).
Download of nvm-setup.zip (4.14 MB) completed.
Hashes match.
Extracting C:\Users\ContainerAdministrator\AppData\Local\Temp\chocolatey\nvm.install\1.1.9\nvm-setup.zip to C:\ProgramData\chocolatey\lib\nvm.install\tools...
C:\ProgramData\chocolatey\lib\nvm.install\tools
C:\ProgramData\chocolatey\lib\nvm.install\tools\nvm-setup.exe.ignore
The install of nvm.install was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\nvm.install\tools'
nvm v1.1.9 [Approved]
nvm package files install completed. Performing other installation steps.
The install of nvm was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\nvm'
Chocolatey installed 2/2 packages.
See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
Removing intermediate container 19f770d7871d
---> f8b42ae67241
Step 6/9 : RUN powershell Import-Module C:\ProgramData\chocolatey\helpers\chocolateyProfile.psm1
---> Running in 26740393fea3
Removing intermediate container 26740393fea3
---> 00698d7e89a9
Step 7/9 : RUN powershell refreshenv
---> Running in b2b437ce5170
Refreshing environment variables from registry for cmd.exe. Please wait...Finished..
Removing intermediate container b2b437ce5170
---> 00e6e3e82e62
Step 8/9 : RUN powershell nvm
---> Running in d23b3bd9f50f
nvm : The term 'nvm' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path
was included, verify that the path is correct and try again.
At line:1 char:1
+ nvm
+ ~~~
+ CategoryInfo : ObjectNotFound: (nvm:String) [], CommandNotFound
Exception
+ FullyQualifiedErrorId : CommandNotFoundException
The command 'cmd /S /C powershell nvm' returned a non-zero code: 1
My host environment is based on a Mirantis Container Cloud cluster (Mirantis Inc., v1.9.0),
and this is the (truncated) output of my docker info command:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker Application (Docker Inc., v0.8.0)
cluster: Manage Mirantis Container Cloud clusters (Mirantis Inc., v1.9.0)
registry: Manage Docker registries (Docker Inc., 0.1.0)
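No answer is quoted here, but a plausible explanation: every RUN line starts a fresh process, so PATH changes made by the nvm installer, or by refreshenv in an earlier RUN, do not carry over to later RUN steps. A hedged sketch of two workarounds (the C:\ProgramData\nvm path is an assumption — check where the choco nvm package actually installs):

```dockerfile
# Option 1: do the refresh and the nvm call in the same RUN (same process):
RUN powershell -Command "Import-Module C:\ProgramData\chocolatey\helpers\chocolateyProfile.psm1; refreshenv; nvm version"

# Option 2: bake the nvm directory into the image's machine PATH explicitly
# (install path is an assumption):
RUN setx /M PATH "%PATH%;C:\ProgramData\nvm"
RUN powershell nvm version
```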

Install wget in Docker in Codebuild

I'm using the Serverless Framework to build an application that uses a Docker image. In the Dockerfile I have these commands:
RUN yum install wget -y
RUN wget https://julialang-s3.julialang.org/bin/linux/x64/1.8/julia-1.8.2-linux-x86_64.tar.gz
which work fine locally (Windows) and on a Linux EC2 instance. However, when I run this through my build on CodeBuild (image: aws/codebuild/amazonlinux2-x86_64-standard:4.0 with runtime-versions: docker 20), I get an error:
Step 12/19 : RUN yum install wget -y
---> Running in f68d5e0607c3
Resolving Dependencies
--> Running transaction check
---> Package wget.x86_64 0:1.14-18.amzn2.1 will be installed
--> Processing Dependency: libidn.so.11(LIBIDN_1.0)(64bit) for package: wget-1.14-18.amzn2.1.x86_64
--> Processing Dependency: libidn.so.11()(64bit) for package: wget-1.14-18.amzn2.1.x86_64
--> Running transaction check
---> Package libidn.x86_64 0:1.28-4.amzn2.0.2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
wget x86_64 1.14-18.amzn2.1 amzn2-core 547 k
Installing for dependencies:
libidn x86_64 1.28-4.amzn2.0.2 amzn2-core 209 k
Transaction Summary
================================================================================
Install 1 Package (+1 Dependent package)
Total download size: 757 k
Installed size: 2.6 M
Downloading packages:
--------------------------------------------------------------------------------
Total 7.1 MB/s | 757 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libidn-1.28-4.amzn2.0.2.x86_64 1/2
Installing : wget-1.14-18.amzn2.1.x86_64 2/2
Rpmdb checksum is invalid: dCDPT(pkg checksums): libidn.x86_64 0:1.28-4.amzn2.0.2 - u
The command '/bin/sh -c yum install wget -y' returned a non-zero code: 1
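No answer is quoted for this one, but the telling line is "Rpmdb checksum is invalid: dCDPT(pkg checksums)", a known symptom of yum running on an overlay storage driver. Two commonly suggested workarounds, offered here as things to try rather than confirmed fixes for this build:

```dockerfile
# Option 1: install yum's overlayfs plugin before anything else
RUN yum install -y yum-plugin-ovl && yum install -y wget

# Option 2: touch the rpmdb files immediately before each yum transaction
RUN touch /var/lib/rpm/* && yum install -y wget
```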

Error making a docker image in a raspberry pi

I'm trying to build my own image of the random scheduler on a Raspberry Pi Kubernetes cluster, but when I use the command
docker build -t angel96eur/marton-randomscheduler .
I get this:
Sending build context to Docker daemon 185.9kB
Step 1/13 : FROM golang:1.11-alpine as backend
---> 2bf7a3ec2cd3
Step 2/13 : RUN apk add --update --no-cache bash ca-certificates curl git make tzdata
---> Using cache
---> 5e5d9d12a87e
Step 3/13 : RUN mkdir -p /go/src/github.com/martonsereg/scheduler
---> Using cache
---> 98179cd910c6
Step 4/13 : ADD Gopkg.* Makefile /go/src/github.com/martonsereg/scheduler/
---> Using cache
---> 70c615ff07f6
Step 5/13 : WORKDIR /go/src/github.com/martonsereg/scheduler
---> Using cache
---> 7cdb09255a20
Step 6/13 : RUN make vendor
---> Running in 2f0555b065c7
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | INSTALL_DIRECTORY=bin DEP_RELEASE_TAG=v0.5.0 sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5230 100 5230 0 0 14487 0 --:--:-- --:--:-- --:--:-- 14487
ARCH = arm
OS = linux
Will install into bin
Release Tag = v0.5.0
Fetching https://github.com/golang/dep/releases/tag/v0.5.0..
Fetching https://github.com/golang/dep/releases/download/v0.5.0/dep-linux-arm..
Request failed with code 404
make: *** [Makefile:37: bin/dep-0.5.0] Error 1
The command '/bin/sh -c make vendor' returned a non-zero code: 2
Where could the error be?
Your logs show the release download failed while building your container:
Release Tag = v0.5.0
Fetching https://github.com/golang/dep/releases/tag/v0.5.0..
Fetching https://github.com/golang/dep/releases/download/v0.5.0/dep-linux-arm..
Request failed with code 404
Looking at the release assets, the golang/dep repo's v0.5.0 doesn't have a dep-linux-arm release asset. The closest release that has an arm version seems to be v0.5.1, so you might want to change your DEP_RELEASE_TAG to that.
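A simplified sketch of how install.sh assembles the asset URL (the real script has more logic; the variable names here are illustrative), which shows why bumping the tag changes which asset is fetched:

```shell
# install.sh detects ARCH=arm and OS=linux (as in the log above), then builds
# the release-asset URL from the tag — v0.5.0 simply has no dep-linux-arm
# asset, so a 404 results; v0.5.1 does have one.
ARCH=arm
OS=linux
DEP_RELEASE_TAG=v0.5.1
asset="dep-${OS}-${ARCH}"
url="https://github.com/golang/dep/releases/download/${DEP_RELEASE_TAG}/${asset}"
echo "$url"
```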

Docker Hub automated build failing with missing variable

I have set the "BUILD ENVIRONMENT VARIABLES" with the necessary variable, and added a hooks/build file with:
#! /bin/bash
docker build \
--build-arg HBASE_VERSION="${HBASE_VERSION}" \
-f "${DOCKERFILE_PATH}" \
-t "${IMAGE_NAME}" .
But it is not being passed during the build process; take a look at the log output:
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Reset branch 'develop'
Your branch is up-to-date with 'origin/develop'.
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/rowupper/hbase-base:1.4.9...
Step 1/9 : FROM openjdk:8-jre-alpine3.9
---> b76bbdb2809f
Step 2/9 : RUN apk add --no-cache wget bash perl
---> Running in 50cf82a30723
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
Executing busybox-1.29.3-r10.trigger
OK: 130 MiB in 60 packages
Removing intermediate container 50cf82a30723
---> 108b5b9b6569
Step 3/9 : ARG HBASE_VERSION
---> Running in 5407a0bcbf60
Removing intermediate container 5407a0bcbf60
---> ea35e0967933
Step 4/9 : ENV HBASE_HOME=/usr/local/hbase HBASE_CONF_DIR=/etc/hbase PATH=${HBASE_HOME}/bin:$PATH
---> Running in 3a74e814acc8
Removing intermediate container 3a74e814acc8
---> 7a289348ba9b
Step 5/9 : WORKDIR $HBASE_HOME
Removing intermediate container e842d4658bf1
---> a6fede2510ec
Step 6/9 : RUN wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions
---> Running in 39b75bc77c5a
--2019-03-19 18:46:05-- https://archive.apache.org/dist/hbase//hbase--bin.tar.gz
Resolving archive.apache.org... 163.172.17.199
Connecting to archive.apache.org|163.172.17.199|:443...
connected.
HTTP request sent, awaiting response...
404 Not Found
2019-03-19 18:46:06 ERROR 404: Not Found.
tar: invalid magic
tar: short read
Removing intermediate container 39b75bc77c5a
The command '/bin/sh -c wget -O - https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip-components=1 --no-same-owner --no-same-permissions' returned a non-zero code: 1
It's visible that the variable is missing. What do I need to do to solve this issue?
Could you try ARG and ENV, like
ARG HBASE_VERSION="default_value"
ENV HBASE_VERSION="$HBASE_VERSION"
in your Dockerfile? The --build-arg value should override the "default_value".
Also, the hooks/build file needs to live in the same directory as the Dockerfile.
My project has several subfolders, each folder with a Dockerfile.
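The failure mode is visible in the wget line of the log: with HBASE_VERSION unset, the URL silently collapses to hbase//hbase--bin.tar.gz. A quick shell check reproduces it, and ${VAR:?} is one way to make such a build fail fast instead (a sketch, not part of the original Dockerfile):

```shell
# With HBASE_VERSION unset, the expansion is empty — compare the 404 line in
# the build log ("https://archive.apache.org/dist/hbase//hbase--bin.tar.gz"):
unset HBASE_VERSION
url="https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz"
echo "$url"

# ${VAR:?message} aborts with a clear error instead of fetching a bad URL, e.g.:
# wget -O - "https://archive.apache.org/dist/hbase/${HBASE_VERSION:?HBASE_VERSION not set}/hbase-${HBASE_VERSION}-bin.tar.gz"
```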

Problems running kapacitor localinstall inside dockerfile

I am trying to install Kapacitor on a CentOS base, but I am facing problems executing the localinstall command (or so I think) when I build the Dockerfile.
My dockerfile is as follows:
FROM centos-base:7
ENV CONFIG_HOME /usr/local/bin
RUN curl -O https://dl.influxdata.com/kapacitor/releases/kapacitor-0.13.1.x86_64.rpm
RUN yum localinstall kapacitor-0.13.1.x86_64.rpm
COPY kapacitor.conf $CONFIG_HOME
ENTRYPOINT ["/bin/bash"]
When I build it, I get the following response:
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM centos-base:7
---> 9ab68a0dd16a
Step 2 : ENV CONFIG_HOME /usr/local/bin
---> Running in ef5b7206e55d
---> 7c1b42d279db
Removing intermediate container ef5b7206e55d
Step 3 : RUN curl -O https://dl.influxdata.com/kapacitor/releases/kapacitor-0.13.1.x86_64.rpm
---> Running in 681bb29474f9
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10.8M 100 10.8M 0 0 123k 0 0:01:29 0:01:29 --:--:-- 224k
---> 99b4e77c89f2
Removing intermediate container 681bb29474f9
Step 4 : RUN yum localinstall kapacitor-0.13.1.x86_64.rpm
---> Running in d67ad03f4830
Loaded plugins: fastestmirror, ovl
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Examining kapacitor-0.13.1.x86_64.rpm: kapacitor-0.13.1-1.x86_64
Marking kapacitor-0.13.1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package kapacitor.x86_64 0:0.13.1-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
kapacitor x86_64 0.13.1-1 /kapacitor-0.13.1.x86_64 41 M
Transaction Summary
================================================================================
Install 1 Package
Total size: 41 M
Installed size: 41 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2016-08-31.04-00.gvfpqf.yumtx
The command '/bin/sh -c yum localinstall kapacitor-0.13.1.x86_64.rpm' returned a non-zero code: 1
Where am I going wrong? Can't I execute a localinstall inside a Dockerfile? Thanks!
Replace
RUN yum localinstall kapacitor-0.13.1.x86_64.rpm
with
RUN yum -y localinstall kapacitor-0.13.1.x86_64.rpm
Without -y, yum prompts for confirmation; in a non-interactive docker build there is no stdin to answer it, which is why your log shows "Is this ok [y/d/N]: Exiting on user command".
