I have a Go app. Some of its dependencies live in a private GitHub repo, and others are local packages inside my app folder. The app compiles and runs on my machine without a problem (when I simply build it without Docker). I am using the Dockerfile below.
FROM ubuntu as intermediate
# install git
RUN apt-get update
RUN apt-get install -y git
RUN mkdir /root/.ssh/
COPY github_rsa.ppk /root/.ssh/github_rsa.ppk
RUN chmod 700 /root/.ssh/github_rsa.ppk
RUN eval $(ssh-agent) && \
ssh-add /root/.ssh/github_rsa.ppk && \
ssh-keyscan -H github.com >> /etc/ssh/ssh_known_hosts && \
git clone git@github.com:myusername/shared.git
FROM golang:latest
ENV GOPATH=/go
RUN echo $GOPATH
ADD . /go/src/SCMicroServer
WORKDIR /go/src/SCMicroServer
COPY --from=intermediate /shared /go/src/github.com/myusername/shared
RUN go get /go/src/SCMicroServer
RUN go install SCMicroServer
ENTRYPOINT /go/src/SCMicroServer
EXPOSE 8080
The first build stage (the Git part) works fine. The build fails at this line in the second stage: RUN go get /go/src/SCMicroServer. I receive this error at that step:
package SCMicroServer/controllers/package1: unrecognized import path "SCMicroServer/controllers/package1" (import path does not begin with hostname)
The command '/bin/sh -c go get /go/src/SCMicroServer' returned a non-zero code: 1
"SCMicroServer/controllers/package1" is one of the local packages in my app folder (or its subfolders) and I have many more in my local folder. I am setting GOPATH env variable in my Dockerfile, so I am not sure what I am missing.
I found the answer; it was not really a Dockerfile problem. I had imported the same package twice, in two different ways, in my main file:
package1 "SCMicroServer/controllers/package1"
"SCMicroServer/controllers/package1"
After I removed the second one, I stopped receiving the error.
I should say up front that this is my first container and actually my first Java app, so I may have some basic questions; please be lenient.
I wrote a Spring Boot app and my colleague wrote the frontend for it in Angular. What I would like to achieve is "one button/one command" in IntelliJ that creates a container holding the whole app, backend and frontend.
What I need to do is:
Clone the FE from the company repository (I am using an SSH key now)
Clone the BE from GitHub
Build the FE
Copy the built FE into the static folder of the Java app
Build the BE
Create a container running this app
My current solution is to create a "builder" container, build the FE and BE there, and then copy the result into a "production" container, like this:
#BUILDER
FROM alpine AS builder
WORKDIR /src
# add credentials on build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/ \
&& echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa \
&& echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
&& chmod 600 /root/.ssh/id_rsa
# installing dependencies
RUN apk update && apk upgrade && apk add --update nodejs nodejs-npm \
&& npm install -g #angular/cli \
&& apk add openjdk11 \
&& apk add maven \
&& apk add --no-cache openssh \
&& apk add --no-cache git
#cloning repositories
RUN git clone git@code.siemens.com:apcprague/edge/metal-forming-fe.git
RUN git clone git@github.com:bzumik1/metalForming.git
# builds front end
WORKDIR /src/metal-forming-fe
RUN npm install && ng build
# builds whole java app with front end
WORKDIR /src/metalForming
RUN cp -a /src/metal-forming-fe/dist/metal-forming/. /src/metalForming/src/main/resources/static \
&& mvn install -DskipTests=true
#PRODUCTION CONTAINER
FROM adoptopenjdk/openjdk11:debian-slim
LABEL maintainer jakub.znamenacek@siemens.com
RUN mkdir app
RUN ["chmod", "+rwx", "/app"]
WORKDIR /app
COPY --from=builder /src/metalForming/target/metal_forming-0.0.1-SNAPSHOT.jar .
EXPOSE 4200
RUN java -version
CMD java -jar metal_forming-0.0.1-SNAPSHOT.jar
This works, but it takes a very long time, so I guess this is not the correct way to do it. Could anyone point me in the right direction? I was wondering whether there is a way to make Maven do all these steps for me, but maybe that is totally off.
Also, if you spot any problem in my Dockerfile, please let me know; as I said, this is my first Dockerfile, so I could have overlooked something.
EDITED:
BTW, does anyone know how I can get rid of this part:
echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
It adds GitHub to known_hosts (I also need to add the company repository there). Without it, git clone asks whether I trust the host and I have to type yes, but I can't do that when the clone runs automatically inside the Docker build. I have tried yes | git clone ... but that does not work either.
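(One approach that is often suggested, sketched here with example hostnames and assuming the openssh package is installed first, is to let ssh-keyscan append the host keys at build time instead of pasting them in; note this trusts whatever key the network returns, unlike pinning the key explicitly as above.)
# sketch: generate known_hosts entries during the build
RUN mkdir -p /root/.ssh \
 && ssh-keyscan -H github.com >> /root/.ssh/known_hosts \
 && ssh-keyscan -H code.siemens.com >> /root/.ssh/known_hosts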
a few things:
1, if this runs "slow" on the machine, then it will run slow inside a container too.
2, remove --no-cache,* you want to cache everything that is static, because on the next build the commands whose inputs have not changed will not run again. Once one command changes, that command re-runs instead of using the builder cache, and all subsequent commands have to re-run too.
*UPDATE: I mixed up "apk add --no-cache" with "docker build --no-cache". I wrongly stated that "apk add --no-cache" means the command is not cached; it is still cached at the Docker builder level. What the flag does do is spare you from deleting the /var/cache/apk/ directory in a later step to keep the image smaller, but you don't need that here anyway, because you are already using a multi-stage build, so it would not affect your final image size.
One more thing to clarify: every statement in the Dockerfile is checked for changes; if it did not change, the Docker builder uses the cached layer and does not run that statement again. The exception is the ADD and COPY commands, where the builder also checksums the copied/added files to see whether they changed. If a statement changed, or the ADD-ed/COPY-ed file(s) changed, that statement is re-run and all subsequent statements re-run too, so you want to put the statement that copies your source code as close to the end as possible.
If you want to disable this cache, run "docker build --no-cache ..."; that way all the steps in the Dockerfile will be re-run.
3, specify WORKDIR once at the top; if you need to switch directory later, use this:
RUN cd /someotherdir && mycommand
Also, a subsequent relative WORKDIR is resolved against the previous WORKDIR, which hurts readability, and readability is (probably) the sole purpose of the WORKDIR statement.
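For example (paths are purely illustrative):
WORKDIR /src
WORKDIR metal-forming-fe
# the working directory is now /src/metal-forming-fe, not /metal-forming-fe,
# which is easy to misread in a longer Dockerfile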
4, Enable BuildKit:
Either declare environment variable
DOCKER_BUILDKIT=1
or add this to /etc/docker/daemon.json
{ "features": { "buildkit": true } }
BuildKit might not help in this case, but if you write more complex Dockerfiles with more stages, BuildKit can run them in parallel, so the overall build will be faster.
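For a one-off build you can also set the variable inline (the image name here is just an example):
DOCKER_BUILDKIT=1 docker build -t metal-forming .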
5, do not skip the tests with -DskipTests=true :)
6, as stated in a comment, do not clone the repos inside the image build; you do not need to do that at all. Just put the Dockerfile at the root of the repo and COPY the repo files with a Dockerfile command:
COPY . .
The first dot is the source, i.e. your current directory (the build context) on your machine; the second dot is the target, the working dir inside the image, which is /src in your Dockerfile. You then build the image and publish it, i.e. push it to a Docker registry, so others can pull it and start using it. If you want more complex build-and-publish automation with the help of a server, look up CI/CD techniques.
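As a rough sketch (the directory names are assumptions; adjust them to wherever the checked-out repos actually sit inside your build context), the start of the builder stage could then look like this, with no SSH key and no git needed:
FROM alpine AS builder
WORKDIR /src
# sources come from the build context instead of git clone
COPY metal-forming-fe/ ./metal-forming-fe/
COPY metalForming/ ./metalForming/
# the tool installation and the npm/maven build steps stay the same as before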
I'm trying to run the protoc command inside a Docker container.
I've tried using the gRPC image, but the protoc command is not found:
/bin/sh: 1: protoc: not found
So I assume I have to install it manually using RUN instructions, but is there a better solution? An official precompiled image with protoc installed?
Also, I've tried to install it via the Dockerfile, but I'm again getting protoc: not found.
This is my Dockerfile
#I'm not using "FROM grpc/node" because that image can't unzip
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_32.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr/local
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin
RUN cp -R ./proto/include/* ${BASE}/include
RUN protoc -I=...
I've added RUN echo $PATH to make sure the folder is in the PATH, and it is:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
I also ran RUN ls -la /usr/local/bin to check that the protoc file is in the folder, and it shows:
-rwxr-xr-x 1 root root 4849692 Jan 2 11:16 protoc
So the file is in the /usr/local/bin folder and that folder is in the PATH.
Have I missed something?
Also, is there a simple way to get an image with protoc installed, or is the best option to generate my own image and pull it from my repository?
Thanks in advance.
Edit: Solved by downloading the linux-x86_64 zip file instead of x86_32. I had downloaded the lower architecture thinking that an x86_64 machine could run an x86_32 binary but not the other way around. I don't know whether I'm missing something about the architecture requirements (probably) or whether it's a bug.
Anyway, in case it helps someone, I found the solution and have added an answer with the necessary Dockerfile to run protoc and protoc-gen-grpc-web.
The easiest way to get non-default tools like this is to install them through the underlying Linux distribution's package manager.
First, look at the Docker Hub page for the node image. (For "library" images like node, construct the URL https://hub.docker.com/_/node.) You'll notice there that there are several variations named "alpine", "buster", or "stretch"; plain node:12 is the same as node:12-stretch and node:12.20.0-stretch. The "alpine" images are based on Alpine Linux; the "buster" and "stretch" ones are different versions of Debian GNU/Linux.
For Debian-based packages, you can then look up the package on https://packages.debian.org/ (type protoc into the "Search the contents of packages" form at the bottom of the page). That leads you to the protobuf-compiler package. Knowing that contains the protoc binary, you can install it in your Dockerfile with:
# Debian-based
FROM node:12
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
protobuf-compiler
# The rest of your Dockerfile as above
COPY ...
RUN protoc ...
You generally must run apt-get update and apt-get install in the same RUN command, lest a subsequent rebuild get an old version of the package cache from the Docker build cache. I generally have only a single apt-get install command if I can manage it, with the packages listed alphabetically, one per line, for maintainability.
If the image is Alpine-based, you can do a similar search on https://pkgs.alpinelinux.org/contents to find protoc, and similarly install it:
FROM node:12-alpine
RUN apk add --no-cache protoc
# The rest of your Dockerfile as above
Finally I solved my own issue.
The problem was the architecture: I was using linux-x86_32.zip, but it works with linux-x86_64.zip.
Even though @David Maze's answer is incredible and very complete, it didn't solve my problem, because apt-get installs version 3.0.0 and I wanted 3.14.0.
So, the Dockerfile I used to run protoc inside a Docker container looks like this:
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin/
RUN cp -R ./proto/include/* ${BASE}/include/
# Download protoc-gen-grpc-web
ENV GRPC_WEB=protoc-gen-grpc-web-1.2.1-linux-x86_64
ENV GRPC_WEB_PATH=/usr/bin/protoc-gen-grpc-web
RUN curl -OL https://github.com/grpc/grpc-web/releases/download/1.2.1/${GRPC_WEB}
# Copy into path
RUN mv ${GRPC_WEB} ${GRPC_WEB_PATH}
RUN chmod +x ${GRPC_WEB_PATH}
RUN protoc -I=...
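In case it helps, the final protoc invocation with the grpc-web plugin could look roughly like this (the proto path, file name and output directory are placeholders for your own layout, and the output directory must already exist):
RUN mkdir -p ./src/generated
RUN protoc -I=./protos ./protos/service.proto \
    --js_out=import_style=commonjs:./src/generated \
    --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./src/generated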
Because this is currently the highest-ranked result on Google and the instructions above won't work if you want to use docker/dind for e.g. GitLab, this is how you can get the glibc dependency working for protoc there:
#!/bin/bash
# install gcompat, because protoc needs a real glibc or compatible layer
apk add gcompat
# install a recent protoc (use a version that fits your needs)
export PB_REL="https://github.com/protocolbuffers/protobuf/releases"
curl -LO $PB_REL/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip
unzip protoc-3.20.0-linux-x86_64.zip -d $HOME/.local
export PATH="$PATH:$HOME/.local/bin"
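If you would rather bake the same steps into an Alpine-based image instead of running them as a script, an untested sketch could look like this (the Alpine tag and install location are assumptions):
FROM alpine:3.16
# gcompat provides the glibc compatibility layer that the protoc binary needs on musl
RUN apk add --no-cache gcompat curl unzip \
 && curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip \
 && unzip protoc-3.20.0-linux-x86_64.zip -d /usr/local \
 && rm protoc-3.20.0-linux-x86_64.zip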
So, I have built a beat with mage GenerateCustomBeat and it runs okay; now I'm trying to containerize it. When I run the image I built, it complains that no customBeat.yml was found.
I have confirmed that the file exists in the folder by adding a RUN ls . line at the end of my Dockerfile.
The beat name is coletorbeat, so this name appears multiple times inside the Dockerfile.
Upon executing sudo docker run coletorbeat I get the following error message:
Exiting: error loading config file: stat coletorbeat.yml: no such file or directory
If there were a way to specify the coletorbeat.yml file location when I execute the beat in CMD, I think that would solve it, but I have not found how to do so yet.
I'll post the Dockerfile below. I know the code inside the beater folder works fine; I'm guessing I'm making some mistake in the containerization.
Dockerfile:
FROM ubuntu
MAINTAINER myNameHere
ARG ${ip:-"333.333.333.333"}
ARG ${porta:-"4343"}
ARG ${dataInicio:-"2020-01-07"}
ARG ${dataFim:-"2020-01-07"}
ARG ${tipoEquipamento:-"type"}
ARG ${versao:-"2"}
ARG ${nivel:-"0"}
ARG ${instituicao:-"RJ"}
ADD . .
RUN mkdir /etc/coletorbeat
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
RUN apt-get update && \
apt-get install -y wget git
RUN wget https://storage.googleapis.com/golang/go1.14.4.linux-amd64.tar.gz
RUN tar -zxvf go1.14.*.linux-amd64.tar.gz -C /usr/local
RUN mkdir /go
ENV GOROOT /usr/local/go
ENV GOPATH $HOME/go
ENV PATH $PATH:$GOROOT/bin:$GOPATH/bin
RUN echo $PATH
RUN go get -u -d github.com/magefile/mage
RUN cd $GOPATH/src/github.com/magefile/mage && \
go run bootstrap.go
RUN apt-get install -y python3-venv
RUN apt-get install -y build-essential
RUN cd /coletorbeat && chmod go-w coletorbeat.yml && ./coletorbeat setup
RUN cd /coletorbeat && ./coletorbeat test config -c /coletorbeat/coletorbeat.yml && ls .
CMD ./coletorbeat/coletorbeat -E 'coletorbeat.ip=${ip}'
You are copying the yml file into the /etc directory:
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
but then running the commands in /coletorbeat, which never looks at /etc.
On the CMD line in the Dockerfile, I added a cd /mybeatfolder command and it worked. Libbeat searches the current folder for the config file by default, so changing to the right directory before executing my beat solved it.
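As a sketch, either of these should have the same effect (mirroring the CMD from the question above):
# change directory as part of CMD ...
CMD cd /coletorbeat && ./coletorbeat -E 'coletorbeat.ip=${ip}'
# ... or set the working directory once and keep the original command
WORKDIR /coletorbeat
CMD ./coletorbeat -E 'coletorbeat.ip=${ip}'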
Problem Statement
I am building a Docker image of my computational bioinformatics pipeline, which contains many tools that are called at different steps of the pipeline. In the process, I am trying to add one tool, the ViennaRNA Package, which has to be downloaded and compiled from source. I have tried several ways to compile it in the Docker build (as shown below), but none of them is working.
Failed attempts
Code-1 :
FROM jupyter/scipy-notebook
USER root
MAINTAINER Vivek Ruhela <vivekr@iiitd.ac.in>
# Copy the application folder inside the container
ADD . /test1
# Set the default directory where CMD will execute
WORKDIR /test1
# Set environment variable
ENV HOME /test1
# Install RNAFold
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P ~/Tools
RUN tar xvzf ~/Tools/ViennaRNA-2.4.14.tar.gz -C ~/Tools
WORKDIR "~/Tools/ViennaRNA-2.4.14/"
RUN ./configure
RUN make && make check && make install
Error : configure file not found
Code-2 :
FROM jupyter/scipy-notebook
USER root
MAINTAINER Vivek Ruhela <vivekr@iiitd.ac.in>
# Copy the application folder inside the container
ADD . /test1
# Set the default directory where CMD will execute
WORKDIR /test1
# Set environment variable
ENV HOME /test1
# Install RNAFold
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P ~/Tools
RUN tar xvzf ~/Tools/ViennaRNA-2.4.14.tar.gz -C ~/Tools
RUN bash ~/Tools/ViennaRNA-2.4.14/configure
WORKDIR "~/Tools/ViennaRNA-2.4.14/"
RUN make && make check && make install
Error : make: *** No targets specified and no makefile found. Stop.
I also tried another way, telling make the file location explicitly, e.g.
RUN make -C ~/Tools/ViennaRNA-2.4.14/
Still, this approach is not working.
Expected Procedure
I have installed this tool on my system many times using the standard procedure from the tool documentation:
./configure
make
make check
make install
Similarly for docker, the following code should work
WORKDIR ~/Tools/ViennaRNA-2.4.14/
RUN ./configure && make && make check && make install
But this code is not working, because WORKDIR does not seem to have any effect. I have checked that configure creates the makefile properly on my system, so it should create the makefile in Docker as well.
Any suggestions on why this code is not working?
You are extracting all the files into the Tools folder, which is in your home directory; try this:
WORKDIR $HOME/Tools/ViennaRNA-2.4.14
RUN ./configure
RUN make && make check && make install
The problem is that WORKDIR ~/Tools/ViennaRNA-2.4.14/ is taken literally as ~/Tools/ViennaRNA-2.4.14/, which creates a folder named ~; use $HOME instead.
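Putting that together, the install section could look like this untested sketch (HOME is /test1 because of the ENV HOME /test1 line above; WORKDIR does substitute variables declared with ENV, but never expands ~, while the shell in RUN expands ~ to the same /test1):
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P $HOME/Tools
RUN tar xvzf $HOME/Tools/ViennaRNA-2.4.14.tar.gz -C $HOME/Tools
WORKDIR $HOME/Tools/ViennaRNA-2.4.14
RUN ./configure && make && make check && make install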
I am trying to add Glide to my Golang project, but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $$GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per @craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh fresh: not found.
The end goal is to be able to mount a volume (for live-reload) and use it in docker-compose, so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know, you don't need to run glide update right after you've installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the differences between CMD, RUN, and ENTRYPOINT: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion, installing packages and libraries should happen with RUN.
For starting your binary or commands, I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could also use CMD for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work; I didn't test this part:
CMD ["$GOPATH/bin/fresh", "-c", "/go/src/runner.conf /go/src/main.go"]