elm make fails in alpine docker container - docker

I am trying to dockerise my elm application (code is open source), here is my Dockerfile:
# set base image as alpine
FROM alpine:3.11.2 AS builder
# download the elm compiler and extract it to /usr/local/bin/elm
RUN wget -O - 'https://github.com/elm/compiler/releases/download/0.19.1/binary-for-linux-64-bit.gz' \
| gunzip -c >/usr/local/bin/elm
# make the elm compiler executable
RUN chmod +x /usr/local/bin/elm
# update remote repositories
RUN apk update
# install nodejs
RUN apk add --update nodejs npm
# install uglifyjs
RUN npm install uglify-js --global
# set the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD
# instructions that follow the WORKDIR instruction.
WORKDIR /app
# remember, our current working directory within the container is /app
# we now copy everything (except stuff listed in .dockerignore)
# from local machine to /app (in the container).
COPY . .
# build elm production code
RUN elm make src/app/Main.elm --optimize --output=elm.js
When I run docker build . --no-cache I get the following error:
ConnectionFailure Network.Socket.getAddrInfo (called with preferred socket type/protocol: AddrInfo {addrFlags = [AI_ADDRCONFIG], addrFamily = AF_UNSPEC, addrSocketType = Stream, addrProtocol = 0, addrAddress = , addrCanonName = }, host name: Just "package.elm-lang.org", service name: Just "443"): does not exist (Try again)
I don't have any connection issues, plus if I did have any, then you'd think the install of nodejs and uglifyjs would also fail, correct? Yet those install without any problems.
I'm confused and not really sure what I need to do.

Looks like this is an OS-level networking problem. An easy hack would be to wrap the elm make command in an infinite retry loop (in a separate script) and run that script instead:
#!/bin/sh
# the alpine base image does not ship bash, so plain sh is used here
while :
do
    elm make src/app/Main.elm --optimize --output=elm.js
    [ $? -eq 0 ] && exit  # exit if the above command succeeds
done
And in your Dockerfile, change the last line to
# build elm production code
RUN ./retry.sh
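If the script is not checked in with its executable bit set, that RUN step can fail with a "permission denied" error. A defensive variant (assuming the script is named retry.sh and lands in /app via the COPY . . above):
# build elm production code, retrying until package.elm-lang.org is reachable
RUN chmod +x ./retry.sh && ./retry.sh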

Related

Automatic build of docker container (java backend, angular frontend)

I would like to say that this is my first container and actually my first Java app, so I may have basic questions; please be lenient.
I wrote a Spring Boot app and my colleague has written the frontend for it in Angular. What I would like to achieve is to have "one button/one command" in IntelliJ to create a container containing the whole app, backend and frontend.
What I need to do is:
Clone FE from company repository (I am using ssh key now)
Clone BE from GitHub
Build FE
Copy built FE to static folder in java app
Build BE
Create a container running this app
My current solution is to create a "builder" container, build the FE and BE there, and then copy the result into a "production" container like this:
#BUILDER
FROM alpine AS builder
WORKDIR /src
# add credentials on build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/ \
&& echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa \
&& echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
&& chmod 600 /root/.ssh/id_rsa
# installing dependencies
RUN apk update && apk upgrade && apk add --update nodejs nodejs-npm \
&& npm install -g @angular/cli \
&& apk add openjdk11 \
&& apk add maven \
&& apk add --no-cache openssh \
&& apk add --no-cache git
#cloning repositories
RUN git clone git@code.siemens.com:apcprague/edge/metal-forming-fe.git
RUN git clone git@github.com:bzumik1/metalForming.git
# builds front end
WORKDIR /src/metal-forming-fe
RUN npm install && ng build
# builds whole java app with front end
WORKDIR /src/metalForming
RUN cp -a /src/metal-forming-fe/dist/metal-forming/. /src/metalForming/src/main/resources/static \
&& mvn install -DskipTests=true
#PRODUCTION CONTAINER
FROM adoptopenjdk/openjdk11:debian-slim
LABEL maintainer jakub.znamenacek@siemens.com
RUN mkdir app
RUN ["chmod", "+rwx", "/app"]
WORKDIR /app
COPY --from=builder /src/metalForming/target/metal_forming-0.0.1-SNAPSHOT.jar .
EXPOSE 4200
RUN java -version
CMD java -jar metal_forming-0.0.1-SNAPSHOT.jar
This works but it takes a very long time, so I guess this is not the correct way to do it. Could anyone point me in the right direction? I was thinking whether there is a way to make Maven do all these steps for me, but maybe this is totally off.
Also, if you find any problem in my Dockerfile please let me know; as I said, this is my first Dockerfile, so I could have overlooked something.
EDITED:
BTW, does anyone know how I can get rid of this part:
echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
It adds GitHub to known_hosts (I also need to add the company repository there). I need it because when I run git clone it asks whether I trust the host and I have to type yes, but there is no way to type it when the build runs non-interactively in Docker. I have tried yes | git clone ... but that is also not working.
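One way to avoid that hard-coded line is to fetch the host keys at build time with ssh-keyscan. A sketch, assuming the openssh package installed above provides ssh-keyscan; note that this trusts whatever key the network returns, which is weaker than pinning a known key:
RUN mkdir -p /root/.ssh \
 && ssh-keyscan github.com code.siemens.com >> /root/.ssh/known_hosts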
a few things:
1, if this runs "slow" on your machine then it will run slow inside a container too.
2, remove --no-cache,* you want to cache everything that is static, because the next time you build, commands whose inputs have not changed will not rerun. Once one command changes, that command reruns instead of using the builder cache, and all subsequent commands have to rerun too.
*UPDATE: I mixed up apk's --no-cache with docker build --no-cache. I stated wrongly that "apk add --no-cache" means the command is not cached; that command is still cached at the Docker builder level. What the parameter does give you is that you don't need to delete the /var/cache/apk/ directory in a later step to keep the image small, but you don't need to do that here anyway: you are already using a multi-stage build, so it does not affect your final image size.
One more thing to clarify: every statement in the Dockerfile is checked for changes; if it did not change, the Docker builder uses the cached layer for it and won't rerun that statement. The exception is the ADD and COPY commands, where the builder also checksums the copied or added files to see whether they changed. If a statement changed, or the files it ADDs or COPYs changed, that statement is rerun and all subsequent statements rerun too, so you want to put your source-code COPY statement as close to the end as possible.
If you want to disable this cache, run "docker build --no-cache ..." and all the steps in the Dockerfile will be rerun.
3, specify WORKDIR once at the top; if you need to switch directory later, use this:
RUN cd /someotherdir && mycommand
Also, a subsequent WORKDIR is relative to the previous WORKDIR, so it hurts readability, which is (probably) the sole purpose of the WORKDIR statement.
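A small illustration of that relative behaviour (the paths are only examples):
WORKDIR /src
# the next WORKDIR resolves to /src/metal-forming-fe, not /metal-forming-fe
WORKDIR metal-forming-fe
# a cd inside RUN only affects that single RUN instruction
RUN cd /src/metalForming && mvn install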
4, Enable BuildKit:
Either declare environment variable
DOCKER_BUILDKIT=1
or add this to /etc/docker/daemon.json
{ "features": { "buildkit": true } }
BuildKit might not help in this case, but if you write more complex Dockerfiles with more stages, BuildKit can run those stages in parallel, so the overall build will be faster.
5, do not skip tests with -DskipTests=true :)
6, as stated in a comment, do not clone the repos inside the image build; you do not need to do that at all. Just put the Dockerfile in the root of the repo and COPY the repo files with a Dockerfile command:
COPY . .
The first dot is the source, i.e. your current directory on your machine; the second dot is the target, the working directory inside the image, /src in your Dockerfile. You build the image and publish it, push it to a Docker registry so others can pull it and start using it. If you want to do more complex building and publishing with the help of a server, look up CI/CD techniques.
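For the concrete Dockerfile above, the builder stage could then start roughly like this. This is only a sketch, assuming both checkouts sit next to the Dockerfile in the build context; the directory names are assumptions:
#BUILDER
FROM alpine AS builder
RUN apk add --no-cache nodejs npm openjdk11 maven \
 && npm install -g @angular/cli
WORKDIR /src
# copy the sources from the build context instead of cloning them during the build
COPY metal-forming-fe/ metal-forming-fe/
COPY metalForming/ metalForming/
# build the frontend, copy it into the backend's static folder, then build the backend
RUN cd metal-forming-fe && npm install && ng build
RUN cp -a metal-forming-fe/dist/metal-forming/. metalForming/src/main/resources/static \
 && cd metalForming && mvn install
The production stage stays the same; only the SSH key handling and the git clone steps disappear.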

sh: curl: not found even install curl inside k8s pod

It might be a simple question but I could not find the proper solution.
I have a Docker image as below. What I would like to do is simply run the curl command inside a Kubernetes pod, but I get the error below. I was also not able to exec in via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
mand in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl -> did not work
RUN apk update && apk add curl curl-dev bash -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called multi-stage build.
Each FROM statement is a new stage and new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the second FROM node:12-alpine statement will contain only the basic alpine packages, the node tools and the /usr/src/app directory which you copied from the first stage.
If you want to have curl in your final image you need to install curl in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install curl here
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As it was mentioned in comments you can test it by running docker container directly - no need to run pod in k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage builds for applications implemented in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and then compile the sources into a binary file. Since you don't need, and probably don't want, sources and developer tools in a production image, you should create a new stage.
In the second stage you copy the compiled binary file and run it as CMD or ENTRYPOINT. This way your image contains only executable code, which makes it smaller.
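A minimal sketch of that pattern for a compiled language (Go is used here purely as an example; image tags and paths are assumptions):
# builder stage: full toolchain plus sources
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/server .
# final stage: only the compiled binary
FROM alpine:3.18
COPY --from=builder /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]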
We can also add curl using apk directly inside the running k8s pod:
apk add curl
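If curl is only needed temporarily for debugging, it can be installed in the already-running pod without rebuilding the image (pod and namespace taken from the question; this assumes the container user is allowed to install packages):
kubectl exec -it hub-cronjob-dev-597cc575f-6lfdc -n hub-dev -- sh -c 'apk add --no-cache curl'
Keep in mind that anything installed this way disappears when the pod is recreated.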

AWS lambda container not picking system dependency

I am building a Lambda to denoise audio files. The Python soundfile package uses the libsndfile system dependency, which I am installing via apt in my Dockerfile. The container runs fine locally, but when I run it after deploying to Lambda it fails with [ "errorMessage": "sndfile library not found",
"errorType": "OSError" ]. Here is my Dockerfile:
# Define global args
ARG FUNCTION_DIR="/home/app"
ARG RUNTIME_VERSION="3.7"
# Stage 1 - bundle base image + runtime
# Grab a fresh copy of the image and install GCC if not installed ( In case of debian its already installed )
FROM python:${RUNTIME_VERSION} AS python-3.7
# Stage 2 - build function and dependencies
FROM python-3.7 AS build-image
# Install aws-lambda-cpp build dependencies ( In case of debian they're already installed )
RUN apt-get update && apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function
COPY app/requirements.txt ${FUNCTION_DIR}/app/requirements.txt
# Optional – Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install -r ${FUNCTION_DIR}/app/requirements.txt --target ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
# RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}
# Stage 3 - final runtime image
# Grab a fresh copy of the Python image
FROM python-3.7
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
# Install librosa system dependencies
RUN apt-get update -y && apt-get install -y \
libsndfile1 \
ffmpeg
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
# ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
# COPY entry.sh /
COPY app ${FUNCTION_DIR}/app
ENV NUMBA_CACHE_DIR=/tmp
RUN ln -s /usr/lib/x86_64-linux-gnu/libsndfile.so.1 /usr/local/bin/libsndfile.so.1
# enable below for local testing
# COPY events ${FUNCTION_DIR}/events
# COPY .env ${FUNCTION_DIR}
# RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler.lambda_handler" ]
Below is my lambda config
{
"FunctionName": "DenoiseAudio",
"FunctionArn": "arn:aws:lambda:us-east-1:xxxx:function:DenoiseAudio",
"Role": "arn:aws:iam::xxxx:role/lambda-s3-role",
"CodeSize": 0,
"Description": "",
"Timeout": 120,
"MemorySize": 128,
"LastModified": "2021-01-25T13:41:00.000+0000",
"CodeSha256": "84ae6e6e475cad50ae5176d6176de09e95a74d0e1cfab3df7cf66a41f65e4e19",
"Version": "$LATEST",
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "43c6e7c4-27a8-4c6d-8c32-c1e074d40a62",
"State": "Active",
"LastUpdateStatus": "Successful",
"PackageType": "Image",
"ImageConfigResponse": {}
}
It's resolved now. It turns out that the error log was misleading: it was happening because the Lambda was not assigned enough memory. As soon as I assigned enough memory the problem went away.
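For reference, the memory size can be raised without touching the image, e.g. with the AWS CLI (the function name is taken from the config above; 1024 MB is only an example value):
aws lambda update-function-configuration --function-name DenoiseAudio --memory-size 1024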
It looks like you are not building from one of the AWS managed base images, which include the runtime interface client, and it also looks like you are not adding it manually. My recommendation is to start with the Python 3.7 or Python 3.8 base image (both found here), then add the other libraries as needed. I would also encourage you to look at the AWS SAM implementation of container image support for Lambda functions. It will make your life easier :). You can find a video demonstration of it here: https://youtu.be/3tMc5r8gLQ8?t=1390.
If you still want to build an image from scratch, take a look at the documents that will guide you through the requirements, found here.
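A rough sketch of what the Dockerfile could look like on a managed base image; only the app/ layout and the handler name are taken from the question, the rest is illustrative, and whether libsndfile is installable via yum on that base should be verified:
FROM public.ecr.aws/lambda/python:3.8
# system dependency for soundfile (verify package availability on the Amazon Linux base)
RUN yum install -y libsndfile
# install Python dependencies into the task root
COPY app/requirements.txt ./
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY app/ ${LAMBDA_TASK_ROOT}/app/
# the managed base image already provides the runtime interface client and ENTRYPOINT
CMD [ "app.handler.lambda_handler" ]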

How to do configure,make and make install in docker build

Problem Statement
I am building a Docker image of my computational bioinformatics pipeline, which contains many tools that are called at different steps of the pipeline. In this process I am trying to add one tool, the ViennaRNA Package, which is downloaded and compiled from source code. I have tried many ways to compile it in the docker build (as shown below), but none of them works.
Failed attempts
Code-1 :
FROM jupyter/scipy-notebook
USER root
MAINTAINER Vivek Ruhela <vivekr@iiitd.ac.in>
# Copy the application folder inside the container
ADD . /test1
# Set the default directory where CMD will execute
WORKDIR /test1
# Set environment variable
ENV HOME /test1
# Install RNAFold
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P ~/Tools
RUN tar xvzf ~/Tools/ViennaRNA-2.4.14.tar.gz -C ~/Tools
WORKDIR "~/Tools/ViennaRNA-2.4.14/"
RUN ./configure
RUN make && make check && make install
Error : configure file not found
Code-2 :
FROM jupyter/scipy-notebook
USER root
MAINTAINER Vivek Ruhela <vivekr@iiitd.ac.in>
# Copy the application folder inside the container
ADD . /test1
# Set the default directory where CMD will execute
WORKDIR /test1
# Set environment variable
ENV HOME /test1
# Install RNAFold
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P ~/Tools
RUN tar xvzf ~/Tools/ViennaRNA-2.4.14.tar.gz -C ~/Tools
RUN bash ~/Tools/ViennaRNA-2.4.14/configure
WORKDIR "~/Tools/ViennaRNA-2.4.14/"
RUN make && make check && make install
Error : make: *** No targets specified and no makefile found. Stop.
I also tried another way, telling make the file location explicitly, e.g.
RUN make -C ~/Tools/ViennaRNA-2.4.14/
Still, this approach is not working.
Expected Procedure
I have installed this tool on my system many times using the standard procedure mentioned in the tool documentation:
./configure
make
make check
make install
Similarly, for Docker, the following code should work:
WORKDIR ~/Tools/ViennaRNA-2.4.14/
RUN ./configure && make && make check && make install
But this code is not working because I don't see any effect of WORKDIR. I have checked that configure creates the makefile properly on my system, so it should create the makefile in Docker too.
Any suggestions on why this code is not working?
You are extracting all the files into the Tools folder, which is in home; try this:
WORKDIR $HOME/Tools/ViennaRNA-2.4.14
RUN ./configure
RUN make && make check && make install
The problem is that WORKDIR ~/Tools/ViennaRNA-2.4.14/ is taken literally as ~/Tools/ViennaRNA-2.4.14/, which creates a folder named ~ (WORKDIR does not expand the tilde). You can use $HOME instead.
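Alternatively, since the same $HOME applies (set to /test1 earlier in the Dockerfile), the whole build can be done in a single RUN without changing WORKDIR at all:
RUN cd $HOME/Tools/ViennaRNA-2.4.14 && ./configure && make && make check && make install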

Changing RUN to CMD stops the container from working

I am trying to add Glide to my Golang project but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per @craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh: fresh: not found.
The end goal is to be able to mount a volume (for the live reload) and to use it in docker-compose, so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between: CMD, RUN and entrypoint: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
something like this might work, though I didn't test this part; note that the exec form does not expand environment variables, so a shell is needed for $GOPATH:
CMD ["sh", "-c", "$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go"]
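One extra thing worth checking, not covered above: mounting $(PWD) over /go at run time hides everything the image installed under /go, including $GOPATH/bin, which would also produce a "not found" error. A quick way to see whether the tools exist in the image and whether the mount hides them (image name taken from the question):
docker build -t docker_test .
# are glide and fresh present in the plain image?
docker run --rm --entrypoint sh docker_test -c 'which glide; which fresh'
# and once the host directory is mounted over /go?
docker run --rm --entrypoint sh -v "$(pwd)":/go docker_test -c 'which glide; which fresh'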
