Dockerfile - RUN unable to execute binary available in environment PATH - docker

Creating a dockerfile to install dependency binary files:
FROM alpine
RUN apk update \
&& apk add ca-certificates wget \
&& update-ca-certificates
RUN mkdir -p /opt/nodejs \
&& cd /opt/nodejs \
&& wget -qO- https://nodejs.org/dist/v8.9.1/node-v8.9.1-linux-x64.tar.gz | tar xvz --strip-components=1
RUN chmod +x /opt/nodejs/bin/*
ENV PATH="/opt/nodejs/bin:${PATH}"
RUN which node
RUN node --version
which node correctly identifies the node binary from $PATH, as $PATH is modified by the ENV command before it. However, RUN node --version is not able to locate the binary.
The image build logs show:
Step 11 : ENV PATH "/opt/nodejs/bin:${PATH}"
---> Using cache
---> 7dc04c05007f
Step 12 : RUN which node
---> Running in deeaf8e9fe09
/opt/nodejs/bin/node
---> 074820b1b9b5
Step 13 : RUN node --version
---> Running in 6f7eabd95e90
/bin/sh: node: not found
The command '/bin/sh -c node --version' returned a non-zero code: 127
What is the proper way to invoke installed binaries during the image build process?
Notes:
I have also tried linking binaries to /bin, but sh still can't find them in RUN.
Docker version 1.12.1

The version of node you installed has dependencies on libraries that are not included in the alpine base image. It was also likely linked against glibc instead of musl.
/ # apk add file
(1/2) Installing libmagic (5.28-r0)
(2/2) Installing file (5.28-r0)
Executing busybox-1.25.1-r0.trigger
OK: 9 MiB in 15 packages
/ # file /opt/nodejs/bin/node
/opt/nodejs/bin/node: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=862ecb804ed99547c06d5bd4ac1090da500acb61, not stripped
/ # ldd /opt/nodejs/bin/node
/lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
librt.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
Error loading shared library libstdc++.so.6: No such file or directory (needed by /opt/nodejs/bin/node)
libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
Error loading shared library libgcc_s.so.1: No such file or directory (needed by /opt/nodejs/bin/node)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
You can find a Dockerfile that installs node on Alpine in the official repo on Docker Hub; that would be a much better starting point.
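For comparison, a minimal sketch of installing node from Alpine's own package repository instead: the packaged build is compiled against musl, so it runs without any glibc compatibility layer (the nodejs package name is current; available versions vary by Alpine release):

```dockerfile
FROM alpine
# nodejs from the Alpine repositories is linked against musl,
# unlike the upstream tarball, which expects glibc
RUN apk add --no-cache nodejs
RUN node --version
```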

Related

Compiling a containerized rust application to run on raspberry pi 4 (arm64/armv8). Can't execute "No such file or directory"

I am trying to run a rust application (server) on a raspberry pi (raspberry pi 4) cluster (k3s, docker). I can compile my docker image using buildx successfully and run it on the raspberry pi when targeting the arm64 architecture
ex: docker buildx build --load --platform=linux/arm64 -t myrepo/myapp:arm-0.0.1 .
Setting the dockerfile command to CMD ["echo", "hi i'm working!"] echoes "hi i'm working!" as expected. This is nice because I know that buildx is working.
My issue comes when trying to get Rust to work as an executable in the container, the following is my dockerfile
FROM rust as builder
ARG APP_NAME="app"
ARG TARGET="x86_64-unknown-linux-musl"
ARG GITHUB_SSH_KEY=""
RUN apt-get update
RUN apt-get install musl-tools gcc-aarch64-linux-gnu gcc-arm-linux-gnueabihf -y
RUN rustup target add $TARGET;
RUN mkdir /usr/src/$APP_NAME
WORKDIR /usr/src/$APP_NAME
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
COPY Cargo.toml Cargo.lock ./
COPY ./src ./src
RUN mkdir .cargo
RUN if [ "$TARGET" = "armv7-unknown-linux-gnueabihf" ]; then printf '\n\n[target.armv7-unknown-linux-gnueabihf] \nlinker = "arm-linux-gnueabihf-gcc"' >> .cargo/config.toml; fi
RUN if [ "$TARGET" = "aarch64-unknown-linux-gnu" ]; then printf '\n\n[target.aarch64-unknown-linux-gnu] \nlinker = "aarch64-linux-gnu-gcc"' >> .cargo/config.toml; fi
RUN mkdir /root/.ssh/
RUN echo "$GITHUB_SSH_KEY" > /root/.ssh/id_rsa;
RUN chmod 400 /root/.ssh/id_rsa
RUN ssh-keyscan -H github.com >> /etc/ssh/ssh_known_hosts
RUN cargo build --release --target=$TARGET
RUN groupadd -g 10001 -r $APP_NAME
RUN useradd -r -g $APP_NAME -u 10001 $APP_NAME
# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------
FROM scratch
ARG APP_NAME="app"
ARG TARGET="x86_64-unknown-linux-musl"
WORKDIR /user/local/bin/
COPY --from=0 /etc/passwd /etc/passwd
COPY --from=builder /usr/src/$APP_NAME/target/$TARGET/release/$APP_NAME ./app
USER $APP_NAME
ENTRYPOINT ["./app"]
As you can see, I can change my target via build args and have tried armv7, aarch64, and even x86_64 out of blind desperation. All of them build without error. During run time x86_64 predictably failed with the typical "exec format error". However, in both armv7 and aarch64 the error is "no such file or directory". I exec'd into the container and could see that the executables were there, however, I cannot run them. When I inspected the file in the armv7 container I got the following output
ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=211fd9297da768ce435048457849b0ae6b22199a, with debug_info, not stripped
Hoping someone can tell me where I'm going wrong because so far I can't get the app to run containerized on my pi cluster. I'm not finding much helpful documentation out there on how to achieve what I am attempting so I thought I'd try asking here. Any help or insight would be greatly appreciated!
One thing to note is that it all compiles and runs fine without the cross-compilation bit so I am certain the app itself is working.
Also at this point for testing, I'm just trying to run a simple "hello, world!" app.
You're only using musl on x86_64. If you don't use musl and don't disable the standard library, the Rust app will depend dynamically on the system libc. That library doesn't exist in a scratch Docker container. So either use a minimal container that does include a copy of libc, like alpine; build a container from scratch that includes libc and its dependencies; or use musl on all architectures.
You should be able to see the libraries needed by running the ldd tool against the executable in the container.
The target strings for musl on arm are: armv7-unknown-linux-musleabihf and aarch64-unknown-linux-musl
I realized that if I removed the target definitions altogether and let the arm64 docker environment build the Rust app, it works as expected. Thanks to #user1937198, I found that using "aarch64-unknown-linux-musl" as the target allowed a static build of the arm Rust binary in the container. The working Dockerfile is below
FROM rust as builder
ARG APP_NAME="app"
ARG TARGET="aarch64-unknown-linux-musl"
ARG GITHUB_SSH_KEY=""
RUN apt-get update
RUN rustup target add $TARGET
RUN mkdir /usr/src/$APP_NAME
WORKDIR /usr/src/$APP_NAME
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
COPY Cargo.toml Cargo.lock ./
COPY ./src ./src
RUN mkdir /root/.ssh/
RUN echo "$GITHUB_SSH_KEY" > /root/.ssh/id_rsa;
RUN chmod 400 /root/.ssh/id_rsa
RUN ssh-keyscan -H github.com >> /etc/ssh/ssh_known_hosts
RUN cargo build --release --target=$TARGET
RUN groupadd -g 10001 -r $APP_NAME
RUN useradd -r -g $APP_NAME -u 10001 $APP_NAME
# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------
FROM scratch
ARG APP_NAME="app"
ARG TARGET="aarch64-unknown-linux-musl"
WORKDIR /user/local/bin/
COPY --from=0 /etc/passwd /etc/passwd
COPY --from=builder /usr/src/$APP_NAME/target/$TARGET/release/$APP_NAME ./app
USER $APP_NAME
CMD ["./app"]

Run protoc command into docker container

I'm trying to run protoc command into a docker container.
I've tried using the gRPC image, but the protoc command is not found:
/bin/sh: 1: protoc: not found
So I assume I have to install manually using RUN instructions, but is there a better solution? An official precompiled image with protoc installed?
Also, I've tried to install via Dockerfile but I'm getting again protoc: not found.
This is my Dockerfile
#I'm not using "FROM grpc/node" because that image can't unzip
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_32.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr/local
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin
RUN cp -R ./proto/include/* ${BASE}/include
RUN protoc -I=...
I've done RUN echo $PATH to ensure the folder is in the path, and it is:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Also RUN ls -la /usr/local/bin to check that the protoc file is in the folder, and it shows:
-rwxr-xr-x 1 root root 4849692 Jan 2 11:16 protoc
So the file is in the /usr/local/bin folder and the folder is in the path.
Have I missed something?
Also, is there a simple way to get the image with protoc installed? or the best option is generate my own image and pull from my repository?
Thanks in advance.
Edit: Solved by downloading the linux-x86_64 zip file instead of x86_32. I downloaded the lower architecture requirements thinking an x86_64 machine could run an x86_32 file, just not the other way around. In practice a 64-bit system can only run 32-bit binaries if the 32-bit dynamic loader and libraries are also installed, which they are not in this image, hence the "not found" error.
Anyway, in case it helps someone, I found the solution and I've added an answer with the necessary Dockerfile to run protoc and protoc-gen-grpc-web.
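A quick way to catch this kind of mismatch up front is to print the host architecture before choosing a release zip (a minimal sketch; the zip-name mapping is the one used in the Dockerfiles in this question):

```shell
#!/bin/sh
# uname -m prints the machine architecture, e.g. x86_64 on 64-bit Intel/AMD;
# the protoc release suffix (linux-x86_64 vs linux-x86_32) must match it
arch="$(uname -m)"
echo "host architecture: $arch"
```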
The easiest way to get non-default tools like this is to install them through the underlying Linux distribution's package manager.
First, look at the Docker Hub page for the node image. (For "library" images like node, construct the URL https://hub.docker.com/_/node.) You'll notice there that there are several variations named "alpine", "buster", or "stretch"; plain node:12 is the same as node:12-stretch and node:12.20.0-stretch. The "alpine" images are based on Alpine Linux; the "buster" and "stretch" ones are different versions of Debian GNU/Linux.
For Debian-based packages, you can then look up the package on https://packages.debian.org/ (type protoc into the "Search the contents of packages" form at the bottom of the page). That leads you to the protobuf-compiler package. Knowing that contains the protoc binary, you can install it in your Dockerfile with:
# Debian-based
FROM node:12
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
protobuf-compiler
# The rest of your Dockerfile as above
COPY ...
RUN protoc ...
You generally must run apt-get update and apt-get install in the same RUN command, lest a subsequent rebuild get an old version of the package cache from the Docker build cache. I generally have only a single apt-get install command if I can manage it, with the packages listed alphabetically, one per line, for maintainability.
If the image is Alpine-based, you can do a similar search on https://pkgs.alpinelinux.org/contents to find protoc, and similarly install it:
FROM node:12-alpine
RUN apk add --no-cache protoc
# The rest of your Dockerfile as above
Finally I solved my own issue.
The problem was the arch version: I was using linux-x86_32.zip but works using linux-x86_64.zip
Even though #David Maze's answer is thorough and complete, it didn't solve my problem, because apt-get installs version 3.0.0 and I wanted 3.14.0.
So, the Dockerfile I have used to run protoc into a docker container is like this:
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin/
RUN cp -R ./proto/include/* ${BASE}/include/
# Download protoc-gen-grpc-web
ENV GRPC_WEB=protoc-gen-grpc-web-1.2.1-linux-x86_64
ENV GRPC_WEB_PATH=/usr/bin/protoc-gen-grpc-web
RUN curl -OL https://github.com/grpc/grpc-web/releases/download/1.2.1/${GRPC_WEB}
# Copy into path
RUN mv ${GRPC_WEB} ${GRPC_WEB_PATH}
RUN chmod +x ${GRPC_WEB_PATH}
RUN protoc -I=...
Because this is currently the highest-ranked result on Google and the instructions above won't work in that setup: if you want to use docker/dind, e.g. for GitLab, this is how you can get the glibc dependency working for protoc there:
#!/bin/bash
# install gcompat, because protoc needs a real glibc or compatible layer
apk add gcompat
# install a recent protoc (use a version that fits your needs)
export PB_REL="https://github.com/protocolbuffers/protobuf/releases"
curl -LO $PB_REL/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip
unzip protoc-3.20.0-linux-x86_64.zip -d $HOME/.local
export PATH="$PATH:$HOME/.local/bin"
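If the same steps go into a Dockerfile rather than a CI script, note that an `export PATH` inside a RUN does not survive into later layers; an ENV instruction is needed instead. A sketch using the same versions as the script above:

```dockerfile
FROM alpine
# gcompat supplies the glibc compatibility layer protoc needs on musl
RUN apk add --no-cache curl gcompat unzip
RUN curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip \
 && unzip protoc-3.20.0-linux-x86_64.zip -d /usr/local \
 && rm protoc-3.20.0-linux-x86_64.zip
# an export in a RUN is lost when that layer's shell exits; ENV persists
ENV PATH="/usr/local/bin:${PATH}"
```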

Docker COPY is not copying script

Docker COPY is not copying over the bash script
FROM alpine:latest
#Install Go and Tini - These remain.
RUN apk add --no-cache go build-base gcc go
RUN apk add --no-cache --update ca-certificates redis git && update-ca-certificates
# Set Env Variables for Go and add Go to Path.
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN go get github.com/rakyll/hey
RUN echo GOLANG VERSION `go version`
COPY ./bench.sh /root/bench.sh
RUN chmod +x /root/bench.sh
ENTRYPOINT /root/bench.sh
Here is the script -
#!/bin/bash
set -e;
echo "entered";
hey;
I try running the above Dockerfile with
$ docker build -t test-bench .
$ docker run -it test-bench
But I get the error
/bin/sh: /root/bench.sh: not found
The file does exist -
$ docker run --rm -it test-bench sh
/ # ls
bin dev etc go home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd root
~ # ls
bench.sh
~ #
Is your docker build successful? When I tried to simulate this, I found the following error:
---> Running in 96468658cebd
go: missing Git command. See https://golang.org/s/gogetcmd
package github.com/rakyll/hey: exec: "git": executable file not found in $PATH
The command '/bin/sh -c go get github.com/rakyll/hey' returned a non-zero code: 1
Try installing git in the Dockerfile, RUN apk add --no-cache go build-base gcc go git, and run the build again.
The COPY operation here seems to be correct. Make sure it is present in the directory from where docker build is executed.
Okay, the script is using /bin/bash, but the bash binary is not available in the alpine image. Either bash has to be installed or a /bin/sh shebang should be used.
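A minimal sketch of the second option: the script in the question only uses POSIX features, so changing its shebang to /bin/sh (busybox ash on alpine) is enough, with no extra packages:

```shell
#!/bin/sh
# /bin/sh exists in the alpine image out of the box; /bin/bash does not
set -e
echo "entered"
# the rest of the script (the hey invocation) stays unchanged
```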

cannot run jfrog executable from inside alpine linux container

I am using an alpine linux container, specifically python:3.4-alpine and openjdk:8-jdk-alpine. When I try to execute any script or executable that I have placed in the container, I get a Not Found error.
For example, when in the python:3.4-alpine container I want to install jfrog, I follow the command here (after installing curl via apk). This command downloads a shell script and pipes it to sh, which downloads and creates a jfrog executable with the correct permissions. When I try to run this executable I get
bin/sh: ./jfrog: not found
update
I discovered that the root user is using /bin/ash by default, which I had never heard of. So I invoked bin/sh jfrog manually and I get
/ # bin/sh jfrog
jfrog: line 1: ELF: not found
jfrog: line 1: syntax error: unterminated quoted string
Any idea what I am doing wrong? I suspect that it has to do with only root user existing in the container.
I'm not sure, but the jfrog executable is dynamically linked, and with ldd jfrog you get:
ldd jfrog
/lib64/ld-linux-x86-64.so.2 (0x55ffb4c8d000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x55ffb4c8d000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x55ffb4c8d000)
As you can see, you have libc dependencies, and alpine comes with musl.
You can try apk add libc6-compat, but I'm not sure it will work.
The problem is that the jfrog cli was compiled against glibc, and alpine linux only provides musl. Making it run under alpine is not trivial; you have to install a glibc sandbox that is bigger than the alpine env itself. https://wiki.alpinelinux.org/wiki/Running_glibc_programs
Another possibility is to compile the jfrog binary yourself in alpine. This Dockerfile worked for me.
FROM golang:alpine
WORKDIR /app/
RUN apk update && apk add git
# checkout the latest tag of jfrog cli
RUN mkdir -p /go/src/github.com/jfrogdev/jfrog-cli-go \
&& git clone https://github.com/JFrogDev/jfrog-cli-go /go/src/github.com/jfrogdev/jfrog-cli-go\
&& cd /go/src/github.com/jfrogdev/jfrog-cli-go \
&& git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
RUN GOOS=linux go get github.com/jfrogdev/jfrog-cli-go/jfrog
FROM alpine
COPY --from=0 /go/bin/jfrog /usr/bin/
ENTRYPOINT ["jfrog"]
The script you are running begins with:
#!/bin/bash
Bash is not included with alpine by default. You can install it with:
apk update && apk add bash
Note that alpine is fairly stripped down by design, so there may be other missing dependencies that you'll need to add to make this script work.
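One way to surface those missing dependencies early is to probe for each external command a script relies on before running it (a sketch; the command list here is illustrative):

```shell
#!/bin/sh
# report which of the commands a script needs are absent from this image
for cmd in bash curl sh; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "missing: $cmd"
  fi
done
```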
May be too late, but this probably might help someone else.
RUN curl -Lo /usr/bin/jfrog https://api.bintray.com/content/jfrog/jfrog-cli-go/\$latest/jfrog-cli-linux-386/jfrog?bt_package=jfrog-cli-linux-386 \
&& chmod a+x /usr/bin/jfrog
(Click Here for Reference Link)

Docker: oci runtime error: exec: "/bin/bash": stat /bin/bash: no such file or directory, in Windows 7

I am using Windows 7. In my home folder I made a new directory, Docker, and inside that I made a new directory, rails.
This is my docker file: (Docker/rails/Dockerfile)
FROM alpine:3.2
MAINTAINER xxx <xxx#xxx.in>
ENV BUILD_PACKAGES bash curl-dev ruby-dev build-base
ENV RUBY_PACKAGES ruby ruby-io-console ruby-bundler
# Update and install all of the required packages.
# At the end, remove the apk cache
RUN apk update && \
apk upgrade && \
apk add $BUILD_PACKAGES && \
apk add $RUBY_PACKAGES && \
rm -rf /var/cache/apk/*
RUN mkdir /usr/app
WORKDIR /usr/app
COPY Gemfile /usr/app/
COPY Gemfile.lock /usr/app/
RUN bundle install
COPY . /usr/app
And then I changed directory to Docker. On ls it shows rails.
Then I typed this command:
docker build rails
Now the image name is alpine. I made a tag to rails like this:
docker tag <imageid> myname/rails
Problem:
The image is successfully built, and I have a repository rails and pushed it successfully. I am able to pull it as well.
Till now everything is fine, but then I run this command:
docker run -i -t xxx/rails /bin/bash
It gives me this error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: oci runtime error: exec: "/bin/bash": stat /bin/bash: no such file or directory.
So I am stuck there.
My Objective:
I want to run this command successfully:
rails -v
To run that command I need to install the image, and I don't know how to install the image; I have been following numerous tutorials since last week.
I am new to docker. This is my first docker image.
Edit:
docker exec -it sh
Alpine does not come with bash by default, only /bin/sh, so you should change your command to:
docker run -i -t vikaran/rails sh
Also worth noting you can run:
docker build -t myname/rails rails
to automatically tag the image when building it.
