I'm trying to use a multi-stage Dockerfile that uses FROM ... AS, but when I build it in a Jenkins job I get an error:
FROM node:8.12.0-alpine AS firstStep
Error parsing reference: "node:8.12.0-alpine AS firstStep" is not a valid repository/tag: invalid reference format
The Dockerfile is this:
FROM node:8.12.0-alpine AS firstStep
WORKDIR /usr/src/app/
# Copy both the package.json and the package-lock.json
COPY package*.json ./
COPY . .
# Deployment container
FROM nginx:1.14.0-alpine
RUN apk add --no-cache bash
RUN apk add --update curl
#set env var for certs
ENV NODE_EXTRA_CA_CERTS /confs/MyPem.pem
# Forward logs to stdout and stderr
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
# Create nginx config dir and copy nginx files for environments into it
RUN mkdir /confs
COPY ./nginxconf/* /confs/
#This copies the Keystore from the workspace and places it at the root of the container
COPY ./MyPem.pem /confs/MyPem.pem
COPY --from=firstStep /usr/src/app/dist /usr/share/nginx/html
COPY ./entrypoint.sh /opt/entrypoint.sh
RUN chmod a+x /opt/entrypoint.sh
ENTRYPOINT ["/opt/entrypoint.sh"]
If you check on Docker Hub, the image node:8.12.0-alpine indeed does not exist. Use, for example, "node:8.12-alpine". Also, you should use lowercase for "firstStep", so "firststep".
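A minimal sketch of the corrected first stage (assuming node:8.12-alpine is an acceptable substitute for your pinned tag):
FROM node:8.12-alpine AS firststep
...
COPY --from=firststep /usr/src/app/dist /usr/share/nginx/html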
Support for multi-stage builds was added in Docker 17.05.0. You can check the current version of the docker client and server with docker version. You'll need to upgrade the docker engine performing the build. Older docker releases are no longer supported once a new major release is delivered, so you'll want to pick a current stable release. Follow the installation guide from Docker for your platform to install from the Docker repos; Linux distributions often ship older versions of docker in their own repos.
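As a quick check before upgrading, something like this prints both versions (assuming the docker CLI on the Jenkins agent talks to the daemon that performs the build):
# multi-stage builds require the daemon doing the build to be >= 17.05
docker version --format 'client={{.Client.Version}} server={{.Server.Version}}'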
I had a Dockerfile that was working fine. However, to remote debug the app, I read that I needed to install dlv in the image and then run dlv, passing it the app I am trying to debug. After installing dlv and attempting to run it, I get the error
exec /dlv: no such file or directory
This is the Dockerfile:
FROM golang:1.18-alpine AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -gcflags="all=-N -l" -o fooapp
# Use the official Debian slim image for a lean production container.
FROM debian:buster-slim
EXPOSE 8000 40000
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
#COPY --from=builder /app/fooapp /app/fooapp #commented this out
COPY --from=builder /go/bin/dlv /dlv
# Run dlv and pass fooapp as a parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
The above results in exec /dlv: no such file or directory
I am not sure why this is happening. Being new to Docker, I have tried different ways to debug it. I tried using dive to check whether the image has dlv at the path /dlv, and it does. I have also attached an image of it.
You built dlv in an Alpine-based image. The dlv executable is linked against musl libc:
# ldd dlv
linux-vdso.so.1 (0x00007ffcd251d000)
libc.musl-x86_64.so.1 => not found
But then you switched to the glibc-based image debian:buster-slim. That image doesn't have the required libraries:
# find / -name libc.musl*
<nothing found>
That's why you can't execute dlv - the dynamic linker fails to find the proper lib.
You need to build in a glibc-based image. For example, replace the first line with:
FROM golang:bullseye AS builder
BTW, after you build, you need to run the container in privileged mode:
$ docker build . -t try-dlv
...
$ docker run --privileged --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:51:02Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
In a non-privileged container, dlv is not allowed to spawn a child process:
$ docker run --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:55:46Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
could not launch process: fork/exec /app/fooapp: operation not permitted
Really Minimal Image
You use debian:buster-slim to minimize the image; its size is 80 MB. But if you need a really small image, use busybox: it adds only 4.86 MB of overhead.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -o fooapp .
# Download certificates
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates
# Use the official Debian slim image for a lean production container.
FROM busybox:glibc
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
COPY --from=builder /etc/ssl /etc/ssl
# Run dlv and pass fooapp as a parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/sh"]
The image size is 25 MB, of which 18 MB come from dlv and 2 MB from the Hello World application.
When choosing the images, take care that they use the same flavor of libc. golang:bullseye links against glibc, so the minimal runtime image must be glibc-based as well.
But if you want a bit more comfort, use alpine with the gcompat package installed. It is a reasonably rich Linux with lots of external packages, for only about 6 MB extra compared to busybox.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Copy local code to the container image.
COPY . ./
# Retrieve application dependencies.
RUN go mod tidy
# Build the binary.
RUN go build -o fooapp .
# Use alpine lean production container.
# FROM busybox:glibc
FROM alpine:latest
# gcompat is the package that lets glibc-based apps run
# ca-certificates contains trusted TLS CA certs
# bash is just for the comfort, I hate /bin/sh
RUN apk add gcompat ca-certificates bash
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
# Run dlv and pass fooapp as a parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/bash"]
TL;DR
Run apt-get install musl, then /dlv should work as expected.
Explanation
Follow these steps:
docker run -it <image-name> sh
apt-get update && apt-get install file
file /dlv
Then you can see the following output:
/dlv: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, Go BuildID=xV8RHgfpp-zlDlpElKQb/DOLzpvO_A6CJb7sj1Nxf/aCHlNjW4ruS1RXQUbuCC/JgrF83mgm55ntjRnBpHH, not stripped
The confusing no such file or directory (see this question for related discussions) is caused by the missing /lib/ld-musl-x86_64.so.1.
As a result, the solution is to install the musl library by following its documentation.
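A minimal sketch of that fix applied to the runtime stage of the Dockerfile above (assuming the musl package from the Debian repos provides /lib/ld-musl-x86_64.so.1, as its documentation describes):
FROM debian:buster-slim
EXPOSE 8000 40000
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    ca-certificates musl && \
    rm -rf /var/lib/apt/lists/*
COPY --from=builder /go/bin/dlv /dlv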
My answer is inspired by this answer.
The no such file or directory error indicates either your binary file does not exist, or your binary is dynamically linked to a library that does not exist.
As said in this answer, delve is linked against musl libc. So for your delve build, you can disable CGO, since CGO can result in dynamic links to glibc/musl:
# Build Delve for debugging
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest
...
This even allows you to later use a scratch image for your final target, and it does not require you to install any additional packages like musl, use a glibc-based image, or run the container in privileged mode.
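A minimal sketch of such a scratch-based final stage, assuming both dlv and fooapp are built with CGO_ENABLED=0 as above:
FROM scratch
EXPOSE 8000 40000
COPY --from=builder /go/bin/dlv /dlv
COPY --from=builder /app/fooapp /app/fooapp
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]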
What permissions do I need?
I created an instance with the gcloud compute instances create-with-container command.
Then, the following was displayed in the log.
Error: Failed to start container: Error response from daemon:
{\" message \ ": \" OCI runtime create failed: container_linux.go: 349: starting container process caused \\\ "exec: \\\\\\\ "./foo\\\\\\\": permission denied \\\ ": unknown \"}
./foo is executable.
The image is built by Google Cloud Build.
Dockerfile
# Use the official golang image to create a binary.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.15-buster as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -mod=readonly -v -o foo
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/foo /app/foo
# Run the task on container startup.
WORKDIR /app
CMD ["./foo"]
This works fine on Cloud Run.
Fix build command
go build -o foo
The original build command generated a foo directory and output the executable file inside it.
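To confirm which case you have, a quick check against the built image (the image name is a placeholder):
# a directory listing here means the build wrote the binary inside foo/ instead of producing an executable named foo
docker run --rm --entrypoint ls <your-image> -ld /app/foo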
I am trying to containerize as well as start my Golang application using docker-compose.
The image is built successfully according to the logs, but my container does not start on docker-compose up, and it throws the following error to my console.
Cannot start service app: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"go\": executable file not found in $PATH": unknown
Here is what my Docker file looks like.
ARG GO_VERSION=1.13
FROM golang:${GO_VERSION}-alpine AS builder
# We create a /raedar directory within our
# image that will hold our application source
# files
RUN mkdir /raedar
# Create the user and group files that will be used in the running container to
# run the process as an unprivileged user.
RUN mkdir /user && \
echo 'nobody:x:65534:65534:nobody:/:' > /user/passwd && \
echo 'nobody:x:65534:' > /user/group
# Install git.
# Git is required for fetching the dependencies.
# Allow Go to retrieve the dependencies for the build
RUN apk update && apk add --no-cache ca-certificates git
RUN apk add --no-cache libc6-compat
# Force the go compiler to use modules
ENV GO111MODULE=on
ADD . /raedar/
WORKDIR /raedar/
RUN go get -d -v golang.org/x/net/html
COPY go.mod go.sum ./
COPY . .
# Compile the binary; we don't want to use cgo
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/main cmd/app/main.go
# Final stage: the running container.
FROM scratch AS final
WORKDIR /root/
# Import the user and group files from the first stage.
COPY --from=builder /user/group /user/passwd /etc/
# Import the Certificate-Authority certificates for enabling HTTPS.
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Import the compiled executable from the first stage.
COPY --from=builder /raedar/bin/main .
EXPOSE 8080
# Perform any further action as an unprivileged user.
USER nobody:nobody
# Run the compiled binary.
CMD ["./main"]
The error shows you're trying to run go, not ./main:
exec: \"go\": executable file not found in $PATH
A matching Dockerfile would have the line CMD ["go", "./main"] rather than CMD ["./main"].
So either you're unexpectedly building a different Dockerfile, or you're changing the command when you run a container with that image. In particular, if you're using docker-compose, make sure you're not setting command: go ./main or entrypoint: go, either of which could cause this behavior.
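For reference, a minimal docker-compose.yml sketch (service name and port are assumptions) that leaves the image's own CMD ["./main"] untouched:
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    # no command: or entrypoint: override here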
I am currently running into a problem trying to set up nginx:alpine in OpenShift.
My build runs just fine, but I am not able to deploy; permission is denied with the following error:
2019/01/25 06:30:54 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
Now I know OpenShift is a bit tricky when it comes to permissions, as the container runs without root privileges and the UID is generated at runtime, which means it's not available in /etc/passwd. But the user is part of the root group. How this is supposed to be handled is described here:
https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
I even went further and made the whole /var completely accessible (777) for testing purposes, but I still get the error. This is what my Dockerfile looks like:
Dockerfile
FROM nginx:alpine
#Configure proxy settings
ENV HTTP_PROXY=http://my.proxy:port
ENV HTTPS_PROXY=http://my.proxy:port
ENV HTTP_PROXY_AUTH=basic:*:username:password
WORKDIR /app
COPY . .
# Install node.js
RUN apk update && \
apk add nodejs npm python make curl g++
# Build Application
RUN npm install
RUN ./node_modules/@angular/cli/bin/ng build
COPY ./dist/my-app /usr/share/nginx/html
# Configure NGINX
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
chmod -R 777 /var
RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf
EXPOSE 8080
It's funny that this issue just seems to affect the alpine version of nginx. nginx:latest (based on Debian, I think) has no issues, and the way to set it up described here
https://torstenwalter.de/openshift/nginx/2017/08/04/nginx-on-openshift.html
works (but I am having some other issues with that build, so I switched to alpine).
Any ideas why this is still not working?
I was using openshift, with limited permissions, so I fixed this problem by using the following nginx image (rather than nginx:latest)
FROM nginxinc/nginx-unprivileged
To resolve this: I think the problem in this Dockerfile was that I used the COPY command to move my build, and that build output did not exist in the build context. So here is my working
Dockerfile
FROM nginx:alpine
LABEL maintainer="ReliefMelone"
WORKDIR /app
COPY . .
# Install node.js
RUN apk update && \
apk add nodejs npm python make curl g++
# Build Application
RUN npm install
RUN ./node_modules/@angular/cli/bin/ng build --configuration=${BUILD_CONFIG}
RUN cp -r ./dist/. /usr/share/nginx/html
# Configure NGINX
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
chmod -R 770 /var/cache/nginx /var/run /var/log/nginx
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
Note that under the Build Application section I now do
RUN cp -r ./dist/. /usr/share/nginx/html
instead of
COPY ./dist/my-app /usr/share/nginx/html
The COPY will not work: since I previously ran ng build inside the container, dist only exists in the container as well, so I need to execute the copy command inside that container.
Had the same error on my nginx:alpine Dockerfile
There is already a user called nginx in the nginx:alpine image. My guess is that it's cleaner to use it to run nginx.
Here is how I resolved it:
Set the owner of /var/cache/nginx to nginx (user 101, group 101)
Create a /var/run/nginx.pid and set the owner to nginx as well
Copy all the files to the image using --chown=nginx:nginx
FROM nginx:alpine
RUN touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/cache/nginx /var/run/nginx.pid
USER nginx
COPY --chown=nginx:nginx my/html/files /usr/share/nginx/html
COPY --chown=nginx:nginx config/myapp/default.conf /etc/nginx/conf.d/default.conf
...
If you're here because you failed to deploy an example helm chart (e.g. helm create mychart), do just like @quasipolynomial suggested, but instead change your deployment file to pull the right image,
i.e.
containers:
  - image: nginxinc/nginx-unprivileged
more info on the official unprivileged image: https://github.com/nginxinc/docker-nginx-unprivileged
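In a chart scaffolded by helm create, the image usually comes from values.yaml rather than being hard-coded in the deployment template, so a hedged sketch of that change (key names may differ slightly by chart version):
image:
  repository: nginxinc/nginx-unprivileged
  tag: latest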
You may change the temp folders nginx uses via the nginx.conf file. You can read more in the section Running nginx as a non-root user.
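A minimal sketch of such an nginx.conf fragment, assuming /tmp is writable for the arbitrary OpenShift UID (the directives are standard nginx, the paths are an assumption):
pid /tmp/nginx.pid;
http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;
    # ... rest of your existing http config ...
}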
May or may not be a step in the right direction (especially helpful for those who came here looking for general help on the [emerg] mkdir() ... failed error).
This solution applies when building nginx from source.
It took me about seven hours to realize the solution is directly related to the prefix path set when compiling nginx.
This is where my configuration throws off nginx (as a very brief example), compiled from this nginx source:
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/tmp/nginx/client-body-temp \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
Without realizing it, I was setting the prefix to /usr/local/nginx but setting the client body temp path & fastcgi temp path to directories outside of it (/tmp/nginx and /var/tmp/nginx).
It's basically breaking nginx's ability to access the correct files, because the temp paths are not correlated to the prefix path.
So I fixed it by (again, super simple configure as an example):
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/usr/local/nginx/client_body_temp \
--http-fastcgi-temp-path=/usr/local/nginx/fastcgi_temp
Further simplified:
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/client_body_temp \
--http-fastcgi-temp-path=/fastcgi_temp
Again, not guaranteed to work, but definitely a step in the right direction.
Run the command below to fix the issue above. The anyuid security context constraint is required.
oc adm policy add-scc-to-user anyuid system:serviceaccount:<NAMESPACE>:default
I've got a repo set up like this:
/config
config.json
/worker-a
Dockerfile
<symlink to config.json>
/code
/worker-b
Dockerfile
<symlink to config.json>
/code
However, building the images fails, because Docker can't handle the symlinks. I should mention my project is far more complicated than this, so restructuring directories isn't a great option. How do I deal with this situation?
Docker doesn't support symlinking files outside the build context.
Here are some different methods for using a shared file in a container:
Build Time
Copy from a config image (Docker buildkit)
Recent versions of Docker allow RUN steps to bind mount from a named image or previous build stage with the --mount=type=bind,target=/dir,source=/dir,from=image-or-stage-name syntax.
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM scratch
COPY config.json /config.json
Build and tag the config image me/worker-config
docker build -t me/worker-config:latest .
Mount the me/worker-config image during the real build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
cp /worker-config/config.json /app/config.json;
Share a base image
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
COPY config.json /config.json
Build and tag the image me/worker-config
docker build -t me/worker-config:latest .
Source the base me/worker-config image for all your worker Dockerfiles
FROM me/worker-config:latest
Build script
Use a script to push the common config to each of your worker containers.
./build worker-n
#!/bin/sh
set -uex
rundir=$(readlink -f "${0%/*}")
container=$1; shift
cd "$rundir/$container"
cp ../config/config.json ./config-docker.json
docker build "$@" .
Build from URL
Pull the config from a common URL for all worker-n builds.
ADD http://somehost/config.json /
Increase the scope of the image build context
Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.
cd ..
docker build -f worker-a/Dockerfile .
All the source paths you reference in a Dockerfile must also change to match the new build context:
COPY workerathing /app
becomes
COPY worker-a/workerathing /app
Using this method, all images share one large build context, which can slow down builds, especially to remote Docker build servers. Note that only the .dockerignore file from the base of the build context is used.
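A hedged .dockerignore sketch for the root of the shared context (the entries are examples for this layout, not requirements):
# keep the shared build context small for every worker build
**/node_modules
**/.git
**/dist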
Alternate build that can mount volumes
Other projects that strive for Dockerfile compatibility may support volumes at build time. For example, podman build / buildah support a --volume option to bind mount files from the host into the build container.
podman build --volume /project/config:/worker-config:ro,Z -t me/worker-a .
Then a RUN step in the build can reference the mounted volume:
RUN cp /worker-config/config.json /app/
Run time
Mount a config directory from a named volume
Volumes like this only work as directories, so you can't specify a file like you could when mounting a file from the host to container.
docker volume create --name=worker-cfg-vol
docker run -v worker-cfg-vol:/config worker-config cp config.json /config
docker run -v worker-cfg-vol:/config worker-a
Mount config directory from data container
Again, directories only, as it's basically the same as above. This will automatically copy files from the image's destination directory into the newly created shared volume, though.
docker create --name wcc -v /config worker-config /bin/true
docker run --volumes-from wcc worker-a
Mount config file from host at runtime
docker run -v /app/config/config.json:/config.json worker-a
Node.js-specific solution
I also ran into this problem, and would like to share another method that hasn't been mentioned above. Instead of using npm link in my Dockerfile, I used yalc.
Install yalc in your container, e.g. RUN npm i -g yalc.
Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
Run yalc add my-lib in each repo that would normally use npm link before running npm install. It will create a local .yalc folder in your Docker container, create a symlink in node_modules that works inside Docker to this folder, and rewrite your package.json to refer to this folder too, so you can safely run install.
Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.
Below is an example Dockerfile, assuming you have a monorepo with three packages: models, gui and server, where the models package must be shared and is named my-models.
# You can access the container using:
# docker run -it my-name sh
# To start it stand-alone:
# docker run -it -p 8888:3000 my-name
FROM node:alpine AS builder
# Install yalc globally (the apk add... line is only needed if your installation requires it)
RUN apk add --no-cache --virtual .gyp python make g++ && \
npm i -g yalc
RUN mkdir /packages && \
mkdir /packages/models && \
mkdir /packages/gui && \
mkdir /packages/server
COPY ./packages/models /packages/models
WORKDIR /packages/models
RUN npm install && \
npm run build && \
yalc publish --private
COPY ./packages/gui /packages/gui
WORKDIR /packages/gui
RUN yalc add my-models && \
npm install && \
npm run build
COPY ./packages/server /packages/server
WORKDIR /packages/server
RUN yalc add my-models && \
npm install && \
npm run build
FROM node:alpine
RUN mkdir -p /app
COPY --from=builder /packages/server/package.json /app/package.json
COPY --from=builder /packages/server/dist /app/dist
# Make sure you copy the yalc registry too.
COPY --from=builder /packages/server/.yalc /app/.yalc
COPY --from=builder /packages/server/node_modules /app/node_modules
COPY --from=builder /packages/gui/dist /app/dist/public
WORKDIR /app
EXPOSE 3000
CMD ["node", "./dist/index.js"]
Hope that helps...
The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.
docker build -f worker-a/Dockerfile .
docker build -f worker-b/Dockerfile .
You'll have to rework your Dockerfiles slightly, to point them at config/config.json relative to the new build context, but that is pretty trivial to fix.
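For example, a sketch of the reworked COPY lines in worker-a/Dockerfile (the /app layout is an assumption):
COPY config/config.json /app/config.json
COPY worker-a/code /app/code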
Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.
How to include files outside of Docker's build context?
Hope this helps! Cheers
An alternative solution is to upgrade all your soft links into hard links.
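For example, a hedged sketch of replacing one of the symlinks with a hard link (this only works when both paths are on the same filesystem):
cd worker-a
rm config.json
ln ../config/config.json config.json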