Cannot install private dependency from Artifact Registry inside Docker build

I am trying to install a private Python package that was uploaded to an Artifact Registry repository inside a Docker container (to deploy it on Cloud Run).
I have successfully used that package in a Cloud Function in the past, so I am sure the package works.
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/${_PROJECT}/${_SERVICE_NAME}:$SHORT_SHA', '--network=cloudbuild', '.', '--progress=plain']
Dockerfile
FROM python:3.8.6-slim-buster
ENV APP_PATH=/usr/src/app
ENV PORT=8080
# Copy requirements.txt to the docker image and install packages
RUN apt-get update && apt-get install -y cython
RUN pip install --upgrade pip
# Set the WORKDIR to be the folder
RUN mkdir -p $APP_PATH
COPY / $APP_PATH
WORKDIR $APP_PATH
RUN pip install -r requirements.txt --no-color
RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3 # This line is where the bug occurs
# Expose port
EXPOSE $PORT
# Use gunicorn as the entrypoint
CMD exec gunicorn --bind 0.0.0.0:8080 app:app
The permissions I added are:
Cloud Build default service account (project-number@cloudbuild.gserviceaccount.com): Artifact Registry Reader
service account running the Cloud Build: Artifact Registry Reader
service account running the app: Artifact Registry Reader
The Cloud Build error:
Step 10/12 : RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
---> Running in b2ead00ccdf4
Looking in indexes: https://pypi.org/simple, https://us-west1-python.pkg.dev/muse-speech-devops/gcp-utils/simple/
User for us-west1-python.pkg.dev: ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 340, in run
requirement_set = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
if not criterion.candidates:
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__

From your traceback log, we can see that Cloud Build doesn't have the credentials to authenticate to the private repo:
Step 10/12 : RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
---> Running in b2ead00ccdf4
Looking in indexes: https://pypi.org/simple, https://us-west1-python.pkg.dev/muse-speech-devops/gcp-utils/simple/
User for us-west1-python.pkg.dev: ERROR: Exception: //<- ASKING FOR USERNAME
I uploaded a simple package to a private Artifact Registry repo to test this out when building a container, and I received the same message. Since you seem to be authenticating with a service account key, the username and password need to be stored inside pip.conf:
pip.conf
[global]
extra-index-url = https://_json_key_base64:KEY@LOCATION-python.pkg.dev/PROJECT/REPOSITORY/simple/
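If you don't already have a base64-encoded key, it can be produced roughly like this (the service account name is a placeholder, and base64 -w0 assumes GNU coreutils):
gcloud iam service-accounts keys create key.json --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com
base64 -w0 key.json   # use the output as the KEY value in the pip.conf above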
The pip.conf file therefore needs to be available during the build process. Multi-stage Docker builds are very useful here to ensure the configuration keys are not exposed, since we can choose which files make it into the final image (the credentials are only present while the packages are downloaded from the private repo):
Sample Dockerfile
# Installing packages in a separate image
FROM python:3.8.6-slim-buster as pkg-build
# Target Python environment variable to bind to pip.conf
ENV PIP_CONFIG_FILE /pip.conf
WORKDIR /packages/
COPY requirements.txt /
# Copying the pip.conf key file only during package downloading
COPY ./config/pip.conf /pip.conf
# Packages are downloaded to the /packages/ directory
RUN pip download -r /requirements.txt
RUN pip download --extra-index-url https://LOCATION-python.pkg.dev/PROJECT/REPO/simple/ PACKAGES
# Final image that will be deployed
FROM python:3.8.6-slim-buster
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
WORKDIR /packages/
# Copying ONLY the packages from the previous build
COPY --from=pkg-build /packages/ /packages/
# Installing the packages from the copied files
RUN pip install --no-index --find-links=/packages/ /packages/*
WORKDIR $APP_HOME
COPY ./src/main.py ./
# Executing sample flask web app
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
I based the Dockerfile above on this related thread, and I could confirm the packages were correctly downloaded from my private Artifact Registry repo, and also that the pip.conf file was not present in the resulting image.
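As a side note, since the build already runs with --network=cloudbuild, an alternative I have seen suggested (not verified here) is to skip the key file entirely and let pip authenticate through Google's keyring backend, which obtains a token for the Cloud Build service account from the metadata server:
RUN pip install keyring keyrings.google-artifactregistry-auth
RUN pip install --extra-index-url https://LOCATION-python.pkg.dev/PROJECT/REPO/simple/ PACKAGE==VERSION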

Related

Poetry install throws "Connection pool is full, discarding connection: pypi.org. Connection pool size: 10" error when building Docker image

I cannot build my Docker image. It throws a "Connection pool is full, discarding connection" error when installing dependencies via Poetry. This does not happen on my host machine. How can I solve this? Do I need to increase the pool size? If yes, how?
My Dockerfile
FROM python:3.10-alpine AS python
ENV PYTHONUNBUFFERED=true
WORKDIR /app
FROM python as poetry
ENV POETRY_HOME=/opt/poetry
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV PATH="$POETRY_HOME/bin:$PATH"
RUN python -c 'from urllib.request import urlopen; print(urlopen("https://install.python-poetry.org").read().decode())' | python -
COPY pyproject.toml poetry.lock ./
RUN poetry install --no-interaction --no-ansi -vvv
Using your Dockerfile with my project I added a line before the last one as follows:
RUN poetry config installer.max-workers 10
RUN poetry install --no-interaction --no-ansi -vvv
It worked for me!
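For what it's worth, if I recall the config mapping correctly, Poetry also reads this setting from an environment variable, which avoids the extra RUN layer:
ENV POETRY_INSTALLER_MAX_WORKERS=10
RUN poetry install --no-interaction --no-ansi -vvv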

Dockerfile issue - Why is the binary dlv not being found - No such file or directory

I had a Dockerfile that was working fine. However, to remote debug it, I read that I needed to install dlv in the image and then run dlv, passing it the app I am trying to debug as a parameter. After installing dlv and attempting to run it, I get the error
exec /dlv: no such file or directory
This is the docker file
FROM golang:1.18-alpine AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -gcflags="all=-N -l" -o fooapp
# Use the official Debian slim image for a lean production container.
FROM debian:buster-slim
EXPOSE 8000 40000
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
#COPY --from=builder /app/fooapp /app/fooapp #commented this out
COPY --from=builder /go/bin/dlv /dlv
# Run dlv as pass fooapp as parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
The above results in exec /dlv: no such file or directory
I am not sure why this is happening. Being new to Docker, I have tried different ways to debug it. I used dive to check whether the image has dlv at the path /dlv, and it does. I have also attached an image of it.
You built dlv in an Alpine-based distro. The dlv executable is linked against musl libc:
# ldd dlv
linux-vdso.so.1 (0x00007ffcd251d000)
libc.musl-x86_64.so.1 => not found
But then you switched to the glibc-based image debian:buster-slim. That image doesn't have the required libraries.
# find / -name libc.musl*
<nothing found>
That's why you can't execute dlv - the dynamic linker fails to find the proper library.
You need to build in a glibc-based image. For example, replace the first line with:
FROM golang:bullseye AS builder
BTW, after you build, you need to run the container in privileged mode:
$ docker build . -t try-dlv
...
$ docker run --privileged --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:51:02Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
In a non-privileged container, dlv is not allowed to spawn a child process.
$ docker run --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:55:46Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
could not launch process: fork/exec /app/fooapp: operation not permitted
Really Minimal Image
You use debian:buster-slim to minimize the image; its size is 80 MB. But if you need a really small image, use busybox: it adds only 4.86 MB of overhead.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -o fooapp .
# Download certificates
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates
# Use the official Debian slim image for a lean production container.
FROM busybox:glibc
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
COPY --from=builder /etc/ssl /etc/ssl
# Run dlv as pass fooapp as parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/sh"]
The image size is 25 MB, of which 18 MB come from dlv and 2 MB from the Hello World application.
When choosing images, take care to use the same flavor of libc. golang:bullseye links against glibc, hence the minimal image must be glibc-based.
But if you want a bit more comfort, use alpine with the gcompat package installed. It is a reasonably rich Linux with lots of external packages for just an extra 6 MB compared to busybox.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Copy local code to the container image.
COPY . ./
# Retrieve application dependencies.
RUN go mod tidy
# Build the binary.
RUN go build -o fooapp .
# Use alpine lean production container.
# FROM busybox:glibc
FROM alpine:latest
# gcompat is the compatibility package for glibc-based apps
# ca-certificates contains trusted TLS CA certs
# bash is just for the comfort, I hate /bin/sh
RUN apk add gcompat ca-certificates bash
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
# Run dlv as pass fooapp as parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/bash"]
TL;DR
Run apt-get install musl, then /dlv should work as expected.
Explanation
Follow these steps:
docker run -it <image-name> sh
apt-get install file
file /dlv
Then you can see the following output:
/dlv: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, Go BuildID=xV8RHgfpp-zlDlpElKQb/DOLzpvO_A6CJb7sj1Nxf/aCHlNjW4ruS1RXQUbuCC/JgrF83mgm55ntjRnBpHH, not stripped
The confusing no such file or directory (see this question for related discussions) is caused by the missing /lib/ld-musl-x86_64.so.1.
As a result, the solution is to install the musl library by following its documentation.
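Applied to the Dockerfile in the question, that presumably amounts to one extra line in the debian:buster-slim stage (untested sketch; the Debian package is simply called musl):
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y musl && rm -rf /var/lib/apt/lists/*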
My answer is inspired by this answer.
The no such file or directory error indicates either your binary file does not exist, or your binary is dynamically linked to a library that does not exist.
As said in this answer, delve is linked against musl libc. So for your delve build, you can disable CGO, since CGO can result in dynamic links to libc/musl:
# Build Delve for debugging
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest
...
This even allows you to use a scratch image for your final target later, and it does not require you to install any additional packages like musl, use a glibc-based image, or run in privileged mode.
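A rough sketch of what that could look like, reusing the paths from the question (illustrative only, not tested here):
FROM golang:1.18-bullseye AS builder
# Statically build both delve and the app so they can run on scratch
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 go build -gcflags="all=-N -l" -o fooapp
FROM scratch
EXPOSE 8000 40000
COPY --from=builder /go/bin/dlv /dlv
COPY --from=builder /app/fooapp /app/fooapp
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]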

What permissions do I need to run executable with container on gce

What permissions do I need?
I created an instance with the gcloud compute instances create-with-container command.
Then, the following was displayed in the log.
Error: Failed to start container: Error response from daemon:
{\" message \ ": \" OCI runtime create failed: container_linux.go: 349: starting container process caused \\\ "exec: \\\\\\\ "./foo\\\\\\\": permission denied \\\ ": unknown \"}
./foo is executable.
The image was built by Google Cloud Build.
Dockerfile
# Use the official golang image to create a binary.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.15-buster as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -mod=readonly -v -o foo
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/foo /app/foo
# Run the task on container startup.
WORKDIR /app
CMD ["./foo"]
This works fine on Cloud Run.
Fix the build command:
go build -o foo
The original build command generated a foo directory and put the executable file inside it.
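If in doubt, a throwaway check in the builder stage makes the difference visible (illustrative only):
RUN go build -o foo && ls -ld foo   # should show a regular executable file, not a directory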

sh: curl: not found even install curl inside k8s pod

It might be a simple question, but I could not find the proper solution.
I have a Docker image as below. What I would like to do is simply run the curl command inside a Kubernetes pod, but I received the error below. I was also not able to exec in via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
mand in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl -> did not work
RUN apk update && apk add curl curl-dev bash -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the second FROM node:12-alpine stage will contain only the basic Alpine packages, the Node tooling, and the /usr/src/app directory copied from the first stage.
If you want to have curl in your final image, you need to install curl in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# No need to install curl here
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As it was mentioned in comments you can test it by running docker container directly - no need to run pod in k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use a multi-stage build for applications implemented in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and compile the sources into a binary file. Since you don't need, and probably don't want, sources and developer tools in a production image, you should create a new stage.
In the second stage you copy the compiled binary file and run it as CMD or ENTRYPOINT. This way your image contains only executable code, which makes it smaller.
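A minimal sketch of that pattern for a compiled language (image tags and paths are illustrative):
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
FROM alpine:3.18
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]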
We can add curl using apk in the k8s pod.
apk add curl
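For example (assuming the container runs as a user allowed to install packages; the change is lost when the pod restarts):
kubectl exec -ti POD_NAME -n NAMESPACE -- sh -c 'apk add curl'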

starting container process caused "exec: \"go\": executable file not found in $PATH": unknown

I am trying to containerize and start my Go application using docker-compose.
The image is built successfully according to the logs, but the container does not start on docker-compose up and it throws the following error to my console.
Cannot start service app: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"go\": executable file not found in $PATH": unknown
Here is what my Docker file looks like.
ARG GO_VERSION=1.13
FROM golang:${GO_VERSION}-alpine AS builder
# We create an /app directory within our
# image that will hold our application source
# files
RUN mkdir /raedar
# Create the user and group files that will be used in the running container to
# run the process as an unprivileged user.
RUN mkdir /user && \
echo 'nobody:x:65534:65534:nobody:/:' > /user/passwd && \
echo 'nobody:x:65534:' > /user/group
# Install git.
# Git is required for fetching the dependencies.
# Allow Go to retrieve the dependencies for the build
RUN apk update && apk add --no-cache ca-certificates git
RUN apk add --no-cache libc6-compat
# Force the go compiler to use modules
ENV GO111MODULE=on
ADD . /raedar/
WORKDIR /raedar/
RUN go get -d -v golang.org/x/net/html
COPY go.mod go.sum ./
COPY . .
# Compile the binary, we don't want to run the cgo
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/main cmd/app/main.go
# Final stage: the running container.
FROM scratch AS final
WORKDIR /root/
# Import the user and group files from the first stage.
COPY --from=builder /user/group /user/passwd /etc/
# Import the Certificate-Authority certificates for enabling HTTPS.
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Import the compiled executable from the first stage.
COPY --from=builder /raedar/bin/main .
EXPOSE 8080
# Perform any further action as an unprivileged user.
USER nobody:nobody
# Run the compiled binary.
CMD ["./main"]
The error shows you're trying to run go, not ./main:
exec: \"go\": executable file not found in $PATH
A matching Dockerfile would have the line CMD ["go", "./main"] rather than CMD ["./main"].
So either you're unexpectedly building a different Dockerfile, or you're changing the command when you run a container with that image. In particular, if you're using docker-compose, make sure you're not setting command: go ./main or entrypoint: go, either of which could cause this behavior.
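If compose is involved, the part worth double-checking looks something like this (service name is illustrative; both keys override what the image defines):
services:
  app:
    build: .
    # command: go ./main    <- overrides the image's CMD and would produce this error
    # entrypoint: go        <- same problem; remove both to fall back to CMD ["./main"]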
