aarch64-alpine-linux-musl/bin/ld: cannot find -lpq - docker

When I build the Rust app with Alpine as the base image using this command:
docker build -f ./Dockerfile -t="reddwarf-pro/reddwarf-admin:v.1.0.0" .
it shows an error like this:
#14 222.9 = note: /usr/lib/gcc/aarch64-alpine-linux-musl/10.3.1/../../../../aarch64-alpine-linux-musl/bin/ld: cannot find -lpq
#14 222.9 collect2: error: ld returned 1 exit status
#14 222.9
#14 222.9
#14 223.0 warning: `reddwarf-admin` (bin "reddwarf-admin") generated 1 warning
#14 223.0 error: could not compile `reddwarf-admin` due to previous error; 1 warning emitted
------
executor failed running [/bin/sh -c cargo build --release]: exit code: 101
I have already added libpq to the Alpine image, so why does this error still appear? What should I do to fix it? This is the Dockerfile:
# to reduce the docker image size
# build stage
FROM rust:1.54-alpine as builder
WORKDIR /app
COPY . /app
RUN rustup default stable
RUN apk update && apk add --no-cache libpq musl-dev pkgconfig openssl-dev gcc
RUN cargo build --release
# RUN cargo build
# Prod stage
FROM alpine:3.15
WORKDIR /app
ENV ROCKET_ADDRESS=0.0.0.0
# ENV ROCKET_PORT=11014
RUN apk update && apk add --no-cache libpq curl
COPY --from=builder /app/.env /app
COPY --from=builder /app/settings.toml /app
COPY --from=builder /app/target/release/reddwarf-admin /app/
COPY --from=builder /app/Rocket.toml /app
CMD ["./reddwarf-admin"]
I also tried adding libpq-dev, but it seems Alpine does not contain that package.

Perhaps slightly confusingly, the dev version is called postgresql-dev and not libpq-dev in v3.14. (From what I can tell, it was renamed libpq-dev in v3.15).
libpq does indeed install the shared library, but postgresql-dev creates the symbolic link /usr/lib/libpq.so -> libpq.so.5.13 which makes linking succeed.
From man ld:
On systems which support shared libraries, ld may also
search for files other than libnamespec.a. Specifically, on
ELF and SunOS systems, ld will search a directory for a
library called libnamespec.so before searching for one
called libnamespec.a. (By convention, a ".so" extension
indicates a shared library.) Note that this behavior does
not apply to :filename, which always specifies a file called
filename.
That is, the -lname syntax will only search for libname.(a|so), not versioned names.
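For the Dockerfile above, that means swapping libpq for the dev package in the builder stage; a minimal sketch (assuming rust:1.54-alpine is based on Alpine 3.14 or older, where the package is still named postgresql-dev; on 3.15+ it is libpq-dev):
RUN apk update && apk add --no-cache postgresql-dev musl-dev pkgconfig openssl-dev gcc
The runtime stage can keep plain libpq, since the built binary links against libpq.so.5 rather than the unversioned libpq.so symlink.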

The dependency you need is:
libpq-dev

Related

# pkg-config --cflags -- MagickWand MagickCore MagickWand MagickCore | Docker | golang:alpine AS build

I am new to Go, and I need to generate a thumbnail from a PDF when it is uploaded.
I am using ImageMagick with Docker and Go, but I am having trouble building a Docker image that includes ImageMagick.
Docker setup:
FROM golang:alpine AS build
RUN apk --no-cache add gcc g++ make git
WORKDIR /go/src/app
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN GOOS=linux go build -ldflags="-s -w" -o ./bin/web-app ./main.go
FROM alpine:3.13
RUN apk --no-cache add ca-certificates
WORKDIR /usr/bin
COPY --from=build /go/src/app/bin /go/bin
EXPOSE 1000
ENTRYPOINT /go/bin/web-app --port 1000
When I build my Docker image, I get this error.
Please kindly help me out with this.
Thank you.
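The error in the title suggests that pkg-config in the build stage cannot find the MagickWand/MagickCore development files. A sketch of a likely fix (not from an answer in this thread) is to add the ImageMagick dev package to the build stage and the runtime library to the final stage:
# build stage (golang:alpine)
RUN apk --no-cache add gcc g++ make git pkgconfig imagemagick-dev
# runtime stage (alpine:3.13)
RUN apk --no-cache add ca-certificates imagemagick
Since the Go ImageMagick bindings link against the C libraries through cgo, the shared libraries must also be present in the final image.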

got pip error while trying to convert an existing docker file to use distroless image

I have a Dockerfile in which I am using python:3.9.2-slim-buster as the base image and I am doing the following:
FROM lab.com:5000/python:3.9.2-slim-buster
ENV PYTHONPATH=base_platform_update
RUN apt-get update && apt-get install -y curl && apt-get clean
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin
WORKDIR /script
RUN pip install SomePackage
COPY base_platform_update ./base_platform_update
ENTRYPOINT ["python3", "base_platform_update/core/main.py"]
I want to convert this to use a distroless image. I tried, but it's not working. I found these resources:
https://github.com/GoogleContainerTools/distroless/blob/main/examples/python3/Dockerfile
https://www.abhaybhargav.com/stories-of-my-experiments-with-distroless-containers/
I know this is not correct, but this is what I came up with after following those resources:
# first stage
FROM lab.com:5000/python:3.9.2-slim-buster AS build-env
WORKDIR /script
COPY base_platform_update ./base_platform_update
RUN apt-get update && apt-get install -y curl && apt-get clean
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN mv ./kubectl /usr/local/bin
# second stage
FROM gcr.io/distroless/python3
WORKDIR /script
COPY --from=build-env /script/base_platform_update ./base_platform_update
COPY --from=build-env /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=build-env /bin/chmod /bin/chmod
COPY --from=build-env /usr/local/bin/pip /usr/local/bin/pip
RUN chmod +x /usr/local/bin/kubectl
ENV PYTHONPATH=base_platform_update
RUN pip install SomePackage
ENTRYPOINT ["python3", "base_platform_update/core/main.py"]
it gives the following error:
/bin/sh: 1: pip: not found
The command '/bin/sh -c pip install SomePackage' returned a non-zero code: 127
I also thought of moving RUN pip install SomePackage to the first stage, but I couldn't figure out how to do that.
Any help would be appreciated. Thanks
EDIT:
docker images output
gcr.io/distroless/python3 latest 7f711ebcfe29 51 years ago 52.2MB
gcr.io/distroless/python3 debug 7c587fbe3d02 51 years ago 53.3MB
It could be that you need to add that dir to the PATH.
ENV PATH="/usr/local/bin:$PATH"
Consider, though, the final image size difference after adding all those dependencies; it might not be worth all the hassle.
The latest image tagged python:3.8.5-alpine is 42.7MB, while gcr.io/distroless/python3 is 52.2MB as of writing. After adding the binaries, the script, and not least the package you want to install, you may well end up past that figure. If pull time is important and network bandwidth usage is expensive, that might be worth thinking about; otherwise, for the current use case it seems like too much.
Distroless images are meant only for runtime; as a result, you can't (by default) use the Python package manager to install packages. See the Google GitHub project readme:
"Distroless" images contain only your application and its runtime
dependencies. They do not contain package managers, shells or any
other programs you would expect to find in a standard Linux
distribution.
You could install the packages in a second, new stage and copy the installed packages from it into the third, but that isn't guaranteed to work because of the target OS the packages were built for, incompatibilities between the second and third stages, etc.
Here's an example Dockerfile for that:
# first stage
FROM python:3.8 AS builder
COPY requirements.txt .
# install dependencies to the local user directory (eg. /root/.local)
RUN pip install --user -r requirements.txt
# second unnamed stage
FROM python:3.8-slim
WORKDIR /code
# copy only the dependencies installation from the 1st stage image
COPY --from=builder /root/.local /root/.local
COPY ./src .
# update PATH environment variable
ENV PATH=/root/.local/bin:$PATH
CMD [ "python", "./server.py" ]
Dockerfile credits
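If the final stage really has to be distroless, the same idea can be adapted roughly like this (only a sketch: it assumes the Python minor version in the builder matches the one inside gcr.io/distroless/python3, otherwise the copied packages won't import, and kubectl would still have to be copied in as in the original attempt):
# first stage
FROM lab.com:5000/python:3.9.2-slim-buster AS builder
RUN pip install --user SomePackage
# second stage
FROM gcr.io/distroless/python3
WORKDIR /script
COPY --from=builder /root/.local /root/.local
COPY base_platform_update ./base_platform_update
# the site-packages path assumes Python 3.9 on both sides
ENV PYTHONPATH=/root/.local/lib/python3.9/site-packages:base_platform_update
ENTRYPOINT ["python3", "base_platform_update/core/main.py"]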
You could also package your application into a binary using any number of Python libraries, but whether that's worth it depends on how much you need it. You can do that with packages like pyinstaller (though it mainly packages the project rather than turning it into a single binary), nuitka, which is a rising and very popular option, and cx_Freeze.
Here's a relevant thread on the topic if you're interested.
There's also this article.
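As a rough illustration of the pyinstaller route (only a sketch; the entry point path is taken from the question above):
pip install pyinstaller
# bundles base_platform_update/core/main.py and its imports into dist/main
pyinstaller --onefile base_platform_update/core/main.py
The resulting binary can then be copied into a minimal base image, so neither pip nor the source tree is needed at runtime.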

Create image using docker for rust application

I have created a Rust application and would like to dockerize it. Below is my Dockerfile, taken from a reference. I am having trouble creating the image and I get the error below. In my local my-app project folder, I have a Cargo.toml that does not contain any package name; it contains only a workspace section (shown below for reference). Please help with this.
error: failed to read /home/rust/src/my-app/config/Cargo.toml
FROM ekidd/rust-musl-builder:stable as builder
RUN USER=root cargo new --bin my-app
WORKDIR ./my-app
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
RUN cargo build --release
RUN rm src/*.rs
ADD . ./
RUN rm ./target/x86_64-unknown-linux-musl/release/deps/my-app*
RUN cargo build --release
FROM alpine:latest
ARG APP=/usr/src/app
EXPOSE 8000
ENV TZ=Etc/UTC \
APP_USER=appuser
RUN addgroup -S $APP_USER \
&& adduser -S -g $APP_USER $APP_USER
RUN apk update \
&& apk add --no-cache ca-certificates tzdata \
&& rm -rf /var/cache/apk/*
COPY --from=builder /home/rust/src/my-app/target/x86_64-unknown-linux-musl/release/rust-docker-web ${APP}/my-app
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./my-app"]
cargo.toml
[workspace]
members = [
"abcd",
"efgh"
"ijkl"
]
After adding the package name in config.toml, I am facing this error:
Caused by:
no targets specified in the manifest
either src/lib.rs, src/main.rs, a [lib] section, or [[bin]] section must be present
The name and version keys are required by Cargo to build your application. Adding those should fix the issue:
[package]
name = "foo"
version = "0.1.0"
[workspace]
members = [
"abcd",
"efgh"
"ijkl"
]
Just a heads up: although the edition key is not required, if it is not specified, Cargo will default to compiling your application with the Rust 2015 edition instead of the newer 2018 edition. You should probably specify your Rust edition (even if you are using the 2015 edition) to avoid any confusion:
[package]
edition = "2018"

How do I modify my DOCKERFILE to install wget into kubernetes pod?

Right now my Dockerfile builds a dotnet image that is installed/updated and runs inside its own pod in a Kubernetes cluster.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
ARG ArtifactPAT
WORKDIR /src
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
COPY /src .
RUN dotnet restore "./sourceCode.csproj" -s "https://api.nuget.org/v3/index.json"
RUN dotnet build "./sourceCode.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "./sourceCode.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "SourceCode.dll"]
EXPOSE 80
The cluster is very bare-bones and includes neither curl nor wget. So, I need to get wget or curl installed in the pod/cluster to execute scripted commands that are set to run automatically after deployment and startup are complete. The command to do the install:
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
within the Dockerfile seems to do nothing to install it in the Kubernetes cluster. After the build runs and deploys, if I exec into the pod and try to run
wget --help
I get "wget doesn't exist". I do not have a lot of experience building Dockerfiles, so I am truly stumped. And I want this automated in the Dockerfile, as I will not be able to log into environments above our Test environment to perform the install manually.
It's not related to Kubernetes or pods. Actually, you can't install anything into a Kubernetes pod itself; you install packages into the containers that run in the pod.
Your problem is that you install wget into your build image. When you switch to the image below it, you lose all the installed packages, because those packages belong to the build image; build, base, and final are all different images. You need to copy files explicitly, like you already do for the final image:
COPY --from=publish /app .
So add the command below to your final image and you can use wget without any problem:
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
See this link for more info and best practices:
https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/
Everything between:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
WORKDIR /app
and:
FROM base AS final
is irrelevant. With that line, you start constructing a new image from base which was defined in the first block.
(Incidentally, on the next line you duplicate the WORKDIR statement needlessly. Also, final is the name you'll use to refer to base; it isn't a name for this finally defined image, so that doesn't really make sense - you don't want to do e.g. COPY --from=final.)
You need to install wget either in the base image or in the last-defined image, the one you'll actually be running, at the end.
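For example, the final stage could be extended like this (a sketch that keeps the rest of the original Dockerfile unchanged):
FROM base AS final
# install wget in the image that actually runs in the pod
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "SourceCode.dll"]
EXPOSE 80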

Monolith docker application with webpack

I am running my monolith application in a Docker container on k8s on GKE.
The application contains Python and Node dependencies and uses webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes the most time generating the bundle. To build the Docker image I am already using a high-spec worker.
To reduce the time I tried using the Kaniko builder.
Issue:
Docker layer caching works perfectly for the Python code. But when there is any change in a JS or CSS file we have to regenerate the bundle, and instead of generating a new bundle it uses the cached layer.
Is there any way to either force building a new bundle or use the cache by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
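One way to "pass some value to the Dockerfile" so that only the bundle layers rebuild is a build argument declared just before the bundle step. This is only a sketch; it assumes the single npm RUN line above is split so that npm install stays in its own cached layer, and ASSETS_VERSION is an invented name:
# in the node-build stage
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
ARG ASSETS_VERSION=dev
# a changed ARG value invalidates the cache for the RUN instructions that follow it
RUN echo "assets ${ASSETS_VERSION}" && npm run sass && npm run build
On the CI side, pass a value that changes with the frontend sources, for example:
docker build --build-arg ASSETS_VERSION=$(git rev-parse --short HEAD) -t my-app .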
I would suggest creating separate build pipelines for your Docker images, since you know that the npm and pip requirements don't change very often.
This will improve the speed enormously, reducing the time spent reaching the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those build images on your registry and use them whenever something on the project changes.
Something like this:
First Docker Builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing, already baked into the builder image in a separate build process
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
A question: why do you install curl and run the pip install -r requirements.txt command again in the final Docker image?
Running apt-get update and install every time without cleaning the apt cache (/var/cache/apt) produces a bigger image.
As a suggestion, use the docker build command with the --no-cache option to avoid caching results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as I listed above.
Build the Docker images 1 and 2 only if you change your python-pip and node-npm requirements, otherwise keep them fixed for your project.
If any dependency requirement changes, then update the docker image involved and then the multistage one to point to the latest built image.
You should always build only the source code of your project (CSS, JS, python). In this way, you have also guaranteed reproducible builds.
To optimize your environment and copy files across the multi-stage builders, try to use virtualenv for python build.
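A minimal sketch of that last point, replacing the /root/.cache copy with a self-contained virtualenv (paths are illustrative):
FROM python:3.5 AS python-build
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt ./
RUN pip install -r requirements.txt
FROM python:3.5-slim
# copy the whole virtualenv instead of pip's cache directory
COPY --from=python-build /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /app
COPY . /app
CMD ["python3", "run.py"]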
