cannot run jfrog executable from inside alpine linux container - docker

I am using an alpine linux container, specifically python:3.4-alpine and openjdk:8-jdk-alpine. When I try to execute any script or executable that I have placed in the container, I get a Not Found error.
For example, when in the python:3.4-alpine container I want to install jfrog, I follow the command here (after installing curl via apk). This command downloads a shell script and pipes it to sh, which downloads and creates a jfrog executable with the correct permissions. When I try to run this executable I get
bin/sh: ./jfrog: not found
update
I discovered that the root user is using /bin/ash (the BusyBox shell) by default, which I had never heard of. So I invoked bin/sh jfrog manually and I get
/ # bin/sh jfrog
jfrog: line 1: ELF: not found
jfrog: line 1: syntax error: unterminated quoted string
Any idea what I am doing wrong? I suspect it has to do with only the root user existing in the container.

I'm not sure, but the jfrog executable is dynamically linked, and with ldd jfrog you get:
ldd jfrog
/lib64/ld-linux-x86-64.so.2 (0x55ffb4c8d000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x55ffb4c8d000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x55ffb4c8d000)
As you can see there are glibc dependencies, and Alpine comes with musl.
You can try apk add libc6-compat, but I'm not sure it will work.
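A quick way to confirm this diagnosis without ldd is to grep the binary for the loader path it embeds: glibc-linked binaries reference /lib64/ld-linux-x86-64.so.2, while musl-linked ones reference a /lib/ld-musl-* path. A minimal sketch (the stand-in file is illustrative; point check_loader at ./jfrog on a real system):

```shell
#!/bin/sh
# Report which dynamic loader a binary embeds. glibc-linked binaries
# reference /lib64/ld-linux-x86-64.so.2; musl-linked ones /lib/ld-musl-*.
check_loader() {
  if grep -aq 'ld-linux-x86-64.so.2' "$1"; then
    echo glibc
  elif grep -aq 'ld-musl' "$1"; then
    echo musl
  else
    echo static-or-unknown
  fi
}

# Demo on a stand-in file; on a real system run: check_loader ./jfrog
printf 'ELF /lib64/ld-linux-x86-64.so.2 ' > /tmp/fake-glibc-bin
check_loader /tmp/fake-glibc-bin   # prints: glibc
```

If this prints glibc inside an Alpine container, the "not found" error is about the missing loader, not about the binary itself being absent.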

The problem is that the jfrog cli was compiled against glibc, while Alpine Linux provides musl. Making it run under Alpine is not trivial: you have to install a glibc compatibility layer that is bigger than the Alpine environment itself. https://wiki.alpinelinux.org/wiki/Running_glibc_programs
Another possibility is to compile the jfrog binary yourself in alpine. This Dockerfile worked for me.
FROM golang:alpine
WORKDIR /app/
RUN apk update && apk add git
# checkout the latest tag of jfrog cli
RUN mkdir -p /go/src/github.com/jfrogdev/jfrog-cli-go \
&& git clone https://github.com/JFrogDev/jfrog-cli-go /go/src/github.com/jfrogdev/jfrog-cli-go \
&& cd /go/src/github.com/jfrogdev/jfrog-cli-go \
&& git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
RUN GOOS=linux go get github.com/jfrogdev/jfrog-cli-go/jfrog
FROM alpine
COPY --from=0 /go/bin/jfrog /usr/bin/
ENTRYPOINT ["jfrog"]

The script you are running begins with:
#!/bin/bash
Bash is not included with alpine by default. You can install it with:
apk update && apk add bash
Note that alpine is fairly stripped down by design, so there may be other missing dependencies that you'll need to add to make this script work.
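One way to see this coming is to look at the interpreter named on the script's first line before running it. A small sketch (the file names are illustrative):

```shell
#!/bin/sh
# Print the interpreter a script's shebang asks for, so you know what to
# `apk add` before running it on Alpine.
shebang_of() {
  head -n 1 "$1"
}

# Demo with a stand-in script; point shebang_of at the real script instead.
printf '#!/bin/bash\necho hi\n' > /tmp/install.sh
case "$(shebang_of /tmp/install.sh)" in
  '#!/bin/bash'*) echo "needs bash: run 'apk add bash' first" ;;
  '#!/bin/sh'*)   echo "BusyBox sh is enough" ;;
  *)              echo "not a shell script" ;;
esac   # prints: needs bash: run 'apk add bash' first
```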

Maybe too late, but this might help someone else.
RUN curl -Lo /usr/bin/jfrog https://api.bintray.com/content/jfrog/jfrog-cli-go/\$latest/jfrog-cli-linux-386/jfrog?bt_package=jfrog-cli-linux-386 \
&& chmod a+x /usr/bin/jfrog

Related

Copy file from dockerfile build to host - bandit

I just started learning docker. To teach myself, I managed to containerize bandit (a python code scanner) but I'm not able to see the output of the scan before the container destroys itself. How can I copy the output file from inside the container to the host, or otherwise save it?
Right now I'm just using bandit to scan itself, basically :)
Dockerfile
FROM python:3-alpine
WORKDIR /
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git ./code-to-scan
CMD [ "python -m bandit -r ./code-to-scan -o bandit.txt" ]
You can mount a volume on your host where you can share the output of bandit.
For example, you can run your container with:
docker run -v $(pwd)/output:/tmp/output -t your_awesome_container:latest
And in your Dockerfile:
...
CMD [ "python -m bandit -r ./code-to-scan -o /tmp/bandit.txt" ]
This way the bandit.txt file will be found in the output folder.
Better to place the code in your image somewhere other than the root directory.
I did some adjustments to your Dockerfile.
FROM python:3-alpine
WORKDIR /usr/myapp
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git .
CMD [ "bandit","-r",".","-o","bandit.txt" ]
This clones the git repository into your WORKDIR.
Note the CMD: it is an array, so just divide all commands and args as in the Dockerfile above.
I put the Dockerfile in my D:\test directory (Windows).
docker build -t test .
docker run -v D:/test/:/usr/myapp test
It will generate the bandit.txt in the test folder.
After the code is executed the container exits, as there is nothing else to do.
You can also pass --rm to remove the container once it finishes.
docker run --rm -v D:/test/:/usr/myapp test

Automatic build of docker container (java backend, angular frontend)

I would like to say that this is my first container and actually my first Java app, so I may have basic questions; please be lenient.
I wrote a Spring Boot app and my colleague has written the frontend part for it in Angular. What I would like to achieve is to have "one button/one command" in IntelliJ to create a container containing the whole app, backend and frontend.
What I need to do is:
Clone FE from company repository (I am using ssh key now)
Clone BE from GitHub
Build FE
Copy built FE to static folder in java app
Build BE
Create a container running this app
My current solution is to create a "builder" container, build the FE and BE there, and then copy the result to a "production" container, like this:
#BUILDER
FROM alpine AS builder
WORKDIR /src
# add credentials on build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/ \
&& echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa \
&& echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
&& chmod 600 /root/.ssh/id_rsa
# installing dependencies
RUN apk update && apk upgrade && apk add --update nodejs nodejs-npm \
&& npm install -g @angular/cli \
&& apk add openjdk11 \
&& apk add maven \
&& apk add --no-cache openssh \
&& apk add --no-cache git
#cloning repositories
RUN git clone git@code.siemens.com:apcprague/edge/metal-forming-fe.git
RUN git clone git@github.com:bzumik1/metalForming.git
# builds front end
WORKDIR /src/metal-forming-fe
RUN npm install && ng build
# builds whole java app with front end
WORKDIR /src/metalForming
RUN cp -a /src/metal-forming-fe/dist/metal-forming/. /src/metalForming/src/main/resources/static \
&& mvn install -DskipTests=true
#PRODUCTION CONTAINER
FROM adoptopenjdk/openjdk11:debian-slim
LABEL maintainer jakub.znamenacek@siemens.com
RUN mkdir app
RUN ["chmod", "+rwx", "/app"]
WORKDIR /app
COPY --from=builder /src/metalForming/target/metal_forming-0.0.1-SNAPSHOT.jar .
EXPOSE 4200
RUN java -version
CMD java -jar metal_forming-0.0.1-SNAPSHOT.jar
This works, but it takes a very long time, so I guess this is not the correct way to do it. Could anyone point me in the right direction? I was wondering if there is a way to make maven do all these steps for me, but maybe this is totally off.
Also, if you find any problem in my Dockerfile please let me know; as I said, this is my first Dockerfile, so I could have overlooked something.
EDITED:
BTW does anyone know how I can get rid of this part:
echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
It adds GitHub to known_hosts (I also need to add the company repository there). Without it, when I run git clone it asks whether I trust the host and I have to type yes, which I cannot do when it runs automatically in docker. I have tried yes | git clone ... but that does not work either.
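On the known_hosts part of the question: a commonly used alternative to pasting the key is to generate the entries with ssh-keyscan during the build. A hedged sketch, keeping the hostnames from the question; note that ssh-keyscan trusts whatever key the network returns at build time, which is weaker than pinning a key you verified out of band:

```dockerfile
# Sketch: populate known_hosts non-interactively at build time instead of
# hardcoding the host key. Trusts the network at build time (see caveat above).
RUN mkdir -p /root/.ssh \
 && ssh-keyscan github.com code.siemens.com >> /root/.ssh/known_hosts
```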
a few things:
1, If this runs "slow" on the machine, then it will run slow inside a container too.
2, Remove --no-cache.* You want to cache everything that is static, because the next time you build, those commands will not run where there is no change. Once one command changes, that command reruns instead of using the builder cache, and all subsequent commands have to rerun too.
*UPDATE: I mistook "apk add --no-cache" for "docker build --no-cache". I stated wrongly that "apk add --no-cache" means the command is not cached; that command is cached at the docker builder level. With this parameter, however, you don't need to delete the /var/cache/apk/ directory in a later step to make your image smaller. You don't need to do that here anyway, because you are already using a multi-stage build, so it does not affect your final image size.
One more thing to clarify: all statements in a Dockerfile are checked for changes; if a statement did not change, the docker builder uses the cached layer for it and won't run it. The exception is the ADD and COPY commands, where the builder also checks, via checksum, whether the copied or added files changed. If a statement changed, or the ADD-ed or COPY-ed file(s) changed, that statement reruns and all subsequent statements rerun too, so you want to put your source-code copy statement as close to the end as possible.
If you want to disable this cache, run "docker build --no-cache ..."; that way all the steps in the Dockerfile will rerun.
3, Specify WORKDIR at the top once; if you need to switch directory later, use this:
RUN cd /someotherdir && mycommand
Also, a subsequent WORKDIR is relative to the previous WORKDIR, which messes up the readability that is (probably) the sole purpose of the WORKDIR statement.
4, Enable BuildKit:
Either declare environment variable
DOCKER_BUILDKIT=1
or add this to /etc/docker/daemon.json
{ "features": { "buildkit": true } }
BuildKit might not help in this case, but if you write more complex Dockerfiles with more stages, BuildKit can run them in parallel, so the overall build will be faster.
5, Do not skip tests with -DskipTests=true :)
6, As stated in a comment, do not clone the repo inside the image build; you do not need to do that at all. Just put the Dockerfile in the / of the repo and COPY the repo files with a Dockerfile command:
COPY . .
The first dot is the source, i.e. your current directory on your machine; the second dot is the target, the working dir inside the image, /src in your Dockerfile. You build the image and publish it, pushing it to a docker registry so others can pull it and start using it. If you want more complex building and publishing with the help of a server, look up CI/CD techniques.
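Putting points 3 and 6 together, the builder stage could shrink to something like this sketch. The maven base image tag and jar path are assumptions, and it presumes the frontend build is handled separately or its output is already committed under src/main/resources/static:

```dockerfile
# Sketch: build from the repo's own files (COPY . .) instead of cloning it.
FROM maven:3-openjdk-11 AS builder
WORKDIR /src
COPY . .
RUN mvn package

FROM adoptopenjdk/openjdk11:debian-slim
WORKDIR /app
COPY --from=builder /src/target/metal_forming-0.0.1-SNAPSHOT.jar .
CMD ["java", "-jar", "metal_forming-0.0.1-SNAPSHOT.jar"]
```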

Run protoc command into docker container

I'm trying to run the protoc command inside a docker container.
I've tried using the gRPC image, but the protoc command is not found:
/bin/sh: 1: protoc: not found
So I assume I have to install manually using RUN instructions, but is there a better solution? An official precompiled image with protoc installed?
Also, I've tried to install it via a Dockerfile, but I'm again getting protoc: not found.
This is my Dockerfile
#I'm not using "FROM grpc/node" because that image can't unzip
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_32.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr/local
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin
RUN cp -R ./proto/include/* ${BASE}/include
RUN protoc -I=...
I've done RUN echo $PATH to ensure the folder is in the PATH, and it is:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Also RUN ls -la /usr/local/bin to check that the protoc file is in the folder, and it shows:
-rwxr-xr-x 1 root root 4849692 Jan 2 11:16 protoc
So the file is in the bin folder, and the folder is in the path.
Have I missed something?
Also, is there a simple way to get an image with protoc installed, or is the best option to generate my own image and pull it from my repository?
Thanks in advance.
Edit: Solved by downloading the linux-x86_64 zip file instead of x86_32. I downloaded the lower-architecture build thinking an x86_64 machine could run an x86_32 file, but not the other way around. I don't know if I'm missing something about architecture requirements (probably I am) or if it's a bug.
Anyway, in case it helps someone, I found the solution and have added an answer with the necessary Dockerfile to run protoc and protoc-gen-grpc-web.
The easiest way to get non-default tools like this is to install them through the underlying Linux distribution's package manager.
First, look at the Docker Hub page for the node image. (For "library" images like node, construct the URL https://hub.docker.com/_/node.) You'll notice there that there are several variations named "alpine", "buster", or "stretch"; plain node:12 is the same as node:12-stretch and node:12.20.0-stretch. The "alpine" images are based on Alpine Linux; the "buster" and "stretch" ones are different versions of Debian GNU/Linux.
For Debian-based packages, you can then look up the package on https://packages.debian.org/ (type protoc into the "Search the contents of packages" form at the bottom of the page). That leads you to the protobuf-compiler package. Knowing that contains the protoc binary, you can install it in your Dockerfile with:
FROM node:12 # Debian-based
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
protobuf-compiler
# The rest of your Dockerfile as above
COPY ...
RUN protoc ...
You generally must run apt-get update and apt-get install in the same RUN command, lest a subsequent rebuild get an old version of the package cache from the Docker build cache. I generally have only a single apt-get install command if I can manage it, with the package list alphabetized, one package per line, for maintainability.
If the image is Alpine-based, you can do a similar search on https://pkgs.alpinelinux.org/contents to find protoc, and similarly install it:
FROM node:12-alpine
RUN apk add --no-cache protoc
# The rest of your Dockerfile as above
Finally I solved my own issue.
The problem was the arch version: I was using linux-x86_32.zip but works using linux-x86_64.zip
Even though @David Maze's answer is incredible and very complete, it didn't solve my problem, because apt-get installs version 3.0.0 and I wanted 3.14.0.
So, the Dockerfile I have used to run protoc into a docker container is like this:
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin/
RUN cp -R ./proto/include/* ${BASE}/include/
# Download protoc-gen-grpc-web
ENV GRPC_WEB=protoc-gen-grpc-web-1.2.1-linux-x86_64
ENV GRPC_WEB_PATH=/usr/bin/protoc-gen-grpc-web
RUN curl -OL https://github.com/grpc/grpc-web/releases/download/1.2.1/${GRPC_WEB}
# Copy into path
RUN mv ${GRPC_WEB} ${GRPC_WEB_PATH}
RUN chmod +x ${GRPC_WEB_PATH}
RUN protoc -I=...
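Since the root cause was the hardcoded architecture suffix, a small guard that derives the release asset name from uname -m can prevent the same mistake. A sketch; the x86_64 name matches the release used above, while the aarch_64 name follows the protobuf release naming convention (treat it as an assumption):

```shell
#!/bin/sh
# Derive the protoc release asset name from the machine architecture
# instead of hardcoding it.
pick_protoc_zip() {
  version="$1"; arch="$2"
  case "$arch" in
    x86_64)  echo "protoc-${version}-linux-x86_64.zip" ;;
    aarch64) echo "protoc-${version}-linux-aarch_64.zip" ;;
    *)       echo "unsupported arch: $arch" >&2; return 1 ;;
  esac
}

pick_protoc_zip 3.14.0 x86_64    # prints: protoc-3.14.0-linux-x86_64.zip
# On a real machine: pick_protoc_zip 3.14.0 "$(uname -m)"
```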
Because this is currently the highest-ranked result on Google and the instructions above won't work if you want to use docker/dind for e.g. gitlab, this is how you can get the glibc dependency working for protoc there:
#!/bin/bash
# install gcompat, because protoc needs a real glibc or compatible layer
apk add gcompat
# install a recent protoc (use a version that fits your needs)
export PB_REL="https://github.com/protocolbuffers/protobuf/releases"
curl -LO $PB_REL/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip
unzip protoc-3.20.0-linux-x86_64.zip -d $HOME/.local
export PATH="$PATH:$HOME/.local/bin"

Import path does not begin with hostname

I have a Go app. Some of its dependencies are in a private Github repo and another part of dependencies are local packages in my app folder. The app compiles and works on my computer without a problem (when I simply compile it without docker). I am using the below Dockerfile.
FROM ubuntu as intermediate
# install git
RUN apt-get update
RUN apt-get install -y git
RUN mkdir /root/.ssh/
COPY github_rsa.ppk /root/.ssh/github_rsa.ppk
RUN chmod 700 /root/.ssh/github_rsa.ppk
RUN eval $(ssh-agent) && \
ssh-add /root/.ssh/github_rsa.ppk && \
ssh-keyscan -H github.com >> /etc/ssh/ssh_known_hosts && \
git clone git@github.com:myusername/shared.git
FROM golang:latest
ENV GOPATH=/go
RUN echo $GOPATH
ADD . /go/src/SCMicroServer
WORKDIR /go/src/SCMicroServer
COPY --from=intermediate /shared /go/src/github.com/myusername/shared
RUN go get /go/src/SCMicroServer
RUN go install SCMicroServer
ENTRYPOINT /go/src/SCMicroServer
EXPOSE 8080
The first build section, related to Git, works fine. It works until this line in the second section: RUN go get /go/src/SCMicroServer. I receive this error at the mentioned step:
package SCMicroServer/controllers/package1: unrecognized import path "SCMicroServer/controllers/package1" (import path does not begin with hostname)
The command '/bin/sh -c go get /go/src/SCMicroServer' returned a non-zero code: 1
"SCMicroServer/controllers/package1" is one of the local packages in my app folder (or its subfolders) and I have many more in my local folder. I am setting GOPATH env variable in my Dockerfile, so I am not sure what I am missing.
I found the answer; it was not really a Dockerfile problem. I referenced my package twice, in two different ways, in my main file:
package1 "SCMicroServer/controllers/package1"
"SCMicroServer/controllers/package1"
After I removed the second one, I stopped receiving the error.

Alpine Docker container doesn't recognize UPX as an executable

Stackers,
I'm using Docker to containerize my app. In the stage below, I'm trying to pack it using UPX.
FROM alpine:3.8 AS compressor
# Version of upx to be used(without the 'v' prefix)
# For all releases, see https://github.com/upx/upx/releases
ARG UPX_VERSION=3.94
# Fetch upx, decompress it, make it executable.
ADD https://github.com/upx/upx/releases/download/v${UPX_VERSION}/upx-${UPX_VERSION}-amd64_linux.tar.xz /tmp/upx.tar.xy
RUN tar -xJOf /tmp/upx.tar.xy upx-${UPX_VERSION}-amd64_linux/upx > /usr/local/bin/upx \
&& chmod +x /usr/local/bin/upx
COPY --from=builder /usr/local/bin/ace /usr/local/bin/ace
RUN /usr/local/bin/upx --overlay=strip --best /usr/local/bin/ace
The thing is when I build the image I get the following error:
The command '/bin/sh -c /usr/local/bin/upx --overlay=strip --best
/usr/local/bin/ace' returned a non-zero code: 127
For some reason, the container doesn't recognize upx as an executable! Can anyone give me some pointers?
Turns out there is an apk package for UPX. The easier way to install it is:
apk add upx
The archive you have is just the file, so you can't address it by name. If you extract the whole thing (which is just the file) it works. Change the corresponding line in your Dockerfile to the following:
RUN tar -xJOf /tmp/upx.tar.xy upx-${UPX_VERSION}-amd64_linux > /usr/local/bin/upx \
Note that I removed the filename from the archive name.
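The general lesson here is that extracting a single member with tar -O requires the exact member path, and listing the archive first removes the guesswork. A self-contained sketch (gzip stands in for xz so it runs anywhere; the member name mirrors the upx layout):

```shell
#!/bin/sh
# List an archive's members before extracting one of them to stdout.
mkdir -p /tmp/upx-demo/upx-3.94-amd64_linux
echo 'fake upx payload' > /tmp/upx-demo/upx-3.94-amd64_linux/upx
tar -czf /tmp/upx-demo.tar.gz -C /tmp/upx-demo upx-3.94-amd64_linux

tar -tzf /tmp/upx-demo.tar.gz      # shows the exact member paths
tar -xzOf /tmp/upx-demo.tar.gz upx-3.94-amd64_linux/upx > /tmp/upx-out
cat /tmp/upx-out                   # prints: fake upx payload
```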
Add apk add --no-cache upx=3.95-r1, as of March 2019.
To get it working with Dockerfile:
(adjust Alpine version to your liking)
FROM alpine:latest as build
RUN apk add --no-cache upx
# Build and use upx as applicable...
