Create image using Docker for a Rust application

I have created a Rust application and would like to dockerize it. Below is my Dockerfile, which I adapted from a reference. I am having trouble building the image and get the error mentioned below. In my local my-app project folder I have a Cargo.toml that does not contain any package name; it contains only a workspace section (shown below for reference). Please help with this.
error: failed to read /home/rust/src/my-app/config/Cargo.toml
FROM ekidd/rust-musl-builder:stable as builder
RUN USER=root cargo new --bin my-app
WORKDIR ./my-app
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
RUN cargo build --release
RUN rm src/*.rs
ADD . ./
RUN rm ./target/x86_64-unknown-linux-musl/release/deps/my-app*
RUN cargo build --release
FROM alpine:latest
ARG APP=/usr/src/app
EXPOSE 8000
ENV TZ=Etc/UTC \
    APP_USER=appuser
RUN addgroup -S $APP_USER \
    && adduser -S -g $APP_USER $APP_USER
RUN apk update \
    && apk add --no-cache ca-certificates tzdata \
    && rm -rf /var/cache/apk/*
COPY --from=builder /home/rust/src/my-app/target/x86_64-unknown-linux-musl/release/rust-docker-web ${APP}/my-app
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./my-app"]
Cargo.toml
[workspace]
members = [
    "abcd",
    "efgh",
    "ijkl"
]
After adding the package name in config.toml, I am facing this error:
Caused by:
no targets specified in the manifest
either src/lib.rs, src/main.rs, a [lib] section, or [[bin]] section must be present

The name and version keys are required by Cargo to build your application. Adding those should fix the issue:
[package]
name = "foo"
version = "0.1.0"
[workspace]
members = [
    "abcd",
    "efgh",
    "ijkl"
]
Just a heads up: although the edition key is not required, if it is not specified Cargo will default to compiling your application with the Rust 2015 edition instead of the newer 2018 edition. You should probably specify your Rust edition (even if you are using the 2015 edition) to avoid any confusion:
[package]
edition = "2018"

Related

A Dockerfile with 2 ENTRYPOINTs

I am learning about Docker, specifically how to write a Dockerfile. Recently I came across this one and couldn't understand why there are two ENTRYPOINT instructions in it.
The original file is at this link: CosmWasm/rust-optimizer Dockerfile. The code below is its current actual content.
FROM rust:1.60.0-alpine as targetarch
ARG BUILDPLATFORM
ARG TARGETPLATFORM
ARG TARGETARCH
ARG BINARYEN_VERSION="version_105"
RUN echo "Running on $BUILDPLATFORM, building for $TARGETPLATFORM"
# AMD64
FROM targetarch as builder-amd64
ARG ARCH="x86_64"
# ARM64
FROM targetarch as builder-arm64
ARG ARCH="aarch64"
# GENERIC
FROM builder-${TARGETARCH} as builder
# Download binaryen sources
ADD https://github.com/WebAssembly/binaryen/archive/refs/tags/$BINARYEN_VERSION.tar.gz /tmp/binaryen.tar.gz
# Extract and compile wasm-opt
# Adapted from https://github.com/WebAssembly/binaryen/blob/main/.github/workflows/build_release.yml
RUN apk update && apk add build-base cmake git python3 clang ninja
RUN tar -xf /tmp/binaryen.tar.gz
RUN cd binaryen-version_*/ && cmake . -G Ninja -DCMAKE_CXX_FLAGS="-static" -DCMAKE_C_FLAGS="-static" -DCMAKE_BUILD_TYPE=Release -DBUILD_STATIC_LIB=ON && ninja wasm-opt
# Run tests
RUN cd binaryen-version_*/ && ninja wasm-as wasm-dis
RUN cd binaryen-version_*/ && python3 check.py wasm-opt
# Install wasm-opt
RUN strip binaryen-version_*/bin/wasm-opt
RUN mv binaryen-version_*/bin/wasm-opt /usr/local/bin
# Check cargo version
RUN cargo --version
# Check wasm-opt version
RUN wasm-opt --version
# Download sccache and verify checksum
ADD https://github.com/mozilla/sccache/releases/download/v0.2.15/sccache-v0.2.15-$ARCH-unknown-linux-musl.tar.gz /tmp/sccache.tar.gz
RUN sha256sum /tmp/sccache.tar.gz | egrep '(e5d03a9aa3b9fac7e490391bbe22d4f42c840d31ef9eaf127a03101930cbb7ca|90d91d21a767e3f558196dbd52395f6475c08de5c4951a4c8049575fa6894489)'
# Extract and install sccache
RUN tar -xf /tmp/sccache.tar.gz
RUN mv sccache-v*/sccache /usr/local/bin/sccache
RUN chmod +x /usr/local/bin/sccache
# Check sccache version
RUN sccache --version
# Add scripts
ADD optimize.sh /usr/local/bin/optimize.sh
RUN chmod +x /usr/local/bin/optimize.sh
ADD optimize_workspace.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/optimize_workspace.sh
# Being required for gcc linking of build_workspace
RUN apk add --no-cache musl-dev
ADD build_workspace build_workspace
RUN cd build_workspace && \
    echo "Installed targets:" && (rustup target list | grep installed) && \
    export DEFAULT_TARGET="$(rustc -vV | grep 'host:' | cut -d' ' -f2)" && echo "Default target: $DEFAULT_TARGET" && \
    # Those RUSTFLAGS reduce binary size from 4MB to 600 KB
    RUSTFLAGS='-C link-arg=-s' cargo build --release && \
    ls -lh target/release/build_workspace && \
    (ldd target/release/build_workspace || true) && \
    mv target/release/build_workspace /usr/local/bin
#
# base-optimizer
#
FROM rust:1.60.0-alpine as base-optimizer
# Being required for gcc linking
RUN apk update && \
apk add --no-cache musl-dev
# Setup Rust with Wasm support
RUN rustup target add wasm32-unknown-unknown
# Add wasm-opt
COPY --from=builder /usr/local/bin/wasm-opt /usr/local/bin
#
# rust-optimizer
#
FROM base-optimizer as rust-optimizer
# Use sccache. Users can override this variable to disable caching.
COPY --from=builder /usr/local/bin/sccache /usr/local/bin
ENV RUSTC_WRAPPER=sccache
# Assume we mount the source code in /code
WORKDIR /code
# Add script as entry point
COPY --from=builder /usr/local/bin/optimize.sh /usr/local/bin
ENTRYPOINT ["optimize.sh"]
# Default argument when none is provided
CMD ["."]
#
# workspace-optimizer
#
FROM base-optimizer as workspace-optimizer
# Assume we mount the source code in /code
WORKDIR /code
# Add script as entry point
COPY --from=builder /usr/local/bin/optimize_workspace.sh /usr/local/bin
COPY --from=builder /usr/local/bin/build_workspace /usr/local/bin
ENTRYPOINT ["optimize_workspace.sh"]
# Default argument when none is provided
CMD ["."]
According to this document, only the last ENTRYPOINT takes effect. But these two are in two different build stages, so is there any special case in which both ENTRYPOINTs take effect, or is this just a bug?
You can keep replacing the entrypoint further down the file; however, this is a multi-stage Dockerfile, so if you build a given stage you get that stage's entrypoint.
For example:
docker build --target rust-optimizer .
will build up to and including that stage; a container run from the resulting image will execute optimize.sh. However,
docker build .
will build the final stage (workspace-optimizer), so a container run from that image will execute optimize_workspace.sh.
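If you want to see this for yourself, one way is to build both variants under different tags and compare the entrypoints baked into each image (a quick sketch; the tag names here are just placeholders):
docker build --target rust-optimizer -t rust-optimizer-test .
docker build -t workspace-optimizer-test .
# Each stage records its own entrypoint in the image config:
docker inspect --format '{{.Config.Entrypoint}}' rust-optimizer-test       # [optimize.sh]
docker inspect --format '{{.Config.Entrypoint}}' workspace-optimizer-test  # [optimize_workspace.sh]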

CGO_ENABLED=1 required when building Go binary using SQLite in Alpine Docker container

I am trying to compile an Alpine Go container which uses GORM and its SQLite driver for an in-memory database. This depends on CGO being enabled. My binary builds and executes fine using go build ., but when running my Docker image (docker build . followed by docker run $imagename) I get the error message:
standard_init_linux.go:219: exec user process caused: no such file or directory
I am building on Windows 10 64 bit. I have gcc installed. Both C:\TDM-GCC-64\bin and C:\cygwin64\bin are in my $env:path. I have changed the line endings to Linux style (LF) for all files in the package.
My Dockerfile is as follows:
FROM golang:1.16.4-alpine AS builder
RUN apk update \
    && apk add --no-cache git \
    && apk add --no-cache ca-certificates \
    && apk add --update gcc musl-dev \
    && update-ca-certificates
# Create a user so that the image doesn't run as root
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/nonexistent" \
    --shell "/sbin/nologin" \
    --no-create-home \
    --uid "100001" \
    "appuser"
# Set the working directory inside the container.
WORKDIR $GOPATH/src/app
# Copy all files from the current directory to the working directory
COPY . .
# Fetch dependencies.
RUN go get -u -d -v
# Go build the binary, specifying the final OS and architecture we're looking for
RUN GOOS=linux CGO_ENABLED=1 GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/app -tags timetzdata
FROM scratch
# Import the user and group files from the builder.
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy our static executable.
COPY --from=builder /go/bin/app /go/bin/app
# Use the user that we've just created, one that isn't root
USER appuser:appuser
ENTRYPOINT ["/go/bin/app"]
Can you please help me understand why my docker image won't run?
In case it's of any value, the line of Go code that opens the DB is shown below. As this works locally using go build or go run ., I don't think it is relevant, however.
db, err := gorm.Open(sqlite.Open("file::memory:?cache=shared"), &gorm.Config{})
Libc does not exist in scratch. I changed the final image to alpine and that solved this error.
The solution was to install both the gcc and alpine-sdk packages using apk in the Dockerfile. Only gcc was being installed in the question above.
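Putting the two answers together, the fix touches two places: the builder stage installs the fuller C toolchain, and the final stage uses alpine instead of scratch so the cgo-linked binary can find musl libc at runtime. A rough sketch based on the Dockerfile in the question (the non-root user setup is omitted for brevity, and the exact package set is an assumption taken from the answers rather than a tested build):
FROM golang:1.16.4-alpine AS builder
# alpine-sdk pulls in the rest of the C toolchain that cgo needs beyond gcc itself
RUN apk update \
    && apk add --no-cache git ca-certificates gcc alpine-sdk \
    && update-ca-certificates
WORKDIR $GOPATH/src/app
COPY . .
RUN go get -u -d -v
RUN GOOS=linux CGO_ENABLED=1 GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/app -tags timetzdata

# alpine rather than scratch, so musl libc is present for the cgo-linked binary at runtime
FROM alpine:latest
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /go/bin/app /go/bin/app
ENTRYPOINT ["/go/bin/app"]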

version `GLIBC_2.29' not found

I am basing my Dockerfile on the rust base image.
When deploying my image to an Azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install -y musl-tools && \
    rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
RUN USER=root
RUN mkdir bot
WORKDIR /bot
ADD . ./
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
    && useradd -g $APP_USER $APP_USER \
    && mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]

Lambda Docker image won't start if I overwrite entrypoint from Lambda console

I have this Dockerfile
ARG FUNCTION_DIR="/opt/"
FROM node:10.13-alpine@sha256:22c8219b21f86dfd7398ce1f62c48a022fecdcf0ad7bf3b0681131bd04a023a2 AS BUILD_IMAGE
ARG FUNCTION_DIR
RUN apk --update add cmake autoconf automake libtool binutils libexecinfo-dev python2 gcc make g++ zlib-dev
ENV NODE_ENV=production
ENV PYTHON=/usr/bin/python2
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile
RUN npm prune --production
RUN yarn cache clean
RUN npm cache clean --force
FROM node:10.13-alpine@sha256:22c8219b21f86dfd7398ce1f62c48a022fecdcf0ad7bf3b0681131bd04a023a2
ARG FUNCTION_DIR
ENV NODE_ENV=production
ENV NODE_OPTIONS=--max_old_space_size=4096
RUN apk update \
    && apk upgrade \
    && apk add mongodb-tools fontconfig dumb-init \
    && rm -rf /var/cache/apk/*
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY --from=BUILD_IMAGE ${FUNCTION_DIR}/node_modules ./node_modules
COPY . .
RUN if [ -f core/config/local.js ]; then rm core/config/local.js; fi
RUN cp core/config/local.js.aws.readonly core/config/local.js
USER node
EXPOSE 8080
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["node", "app.js", "--app=search", "--env=production"]
I use this Dockerfile to generate an image (called core-a) that runs our application in K8s. I've added some code inside my application to handle the case where it is launched from a Lambda function, and I've created another Dockerfile like the one above but using a custom ENTRYPOINT and CMD set to these values:
ENTRYPOINT [ "/usr/local/bin/npx", "aws-lambda-ric" ]
CMD [ "apps/search/index.handler" ]
Then I deployed this image, called core-b, to ECR, used core-b as the Docker image for a Lambda function, and everything worked as expected.
After that I thought I could use the ability to override the ENTRYPOINT and CMD so as to use the same Docker image for both environments, so I pointed the Lambda function's image at core-a and entered the ENTRYPOINT and CMD values I had used in the core-b Dockerfile, but doing so I get an error:
Couldn't find valid bootstrap(s): [\"/usr/local/bin/npx\"]
Does anyone have any suggestions?
Try removing the quotation marks (" ") when entering the override values in the web form.
The AWS docs unfortunately have an incorrect note that says to use quotation marks around each string.
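In practice that means entering the override values in the console as plain comma-separated strings, along these lines (a sketch of what the answer describes, not official syntax):
ENTRYPOINT override: /usr/local/bin/npx, aws-lambda-ric
CMD override: apps/search/index.handler
instead of "/usr/local/bin/npx", "aws-lambda-ric" and "apps/search/index.handler".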

Docker Node.js issues: sudo not found

I am having issues with building my Dockerfile with Azure DevOps.
Here is a copy of my Dockerfile:
FROM node:10-alpine
# Create app directory
WORKDIR /usr/src/app
# Copy app
COPY . .
# install packages
RUN apk --no-cache --virtual build-dependencies add \
        git \
        python \
        make \
        g++ \
    && sudo npm@latest -g wait-on concurrently truffle \
    && npm install \
    && apk del build-dependencies \
    && truffle compile --all
# Expose the right ports, the commands below are irrelevant when using a docker-compose file.
EXPOSE 3000
CMD ["npm", "run", "server”]
It was working recently; now I am getting the following error message:
sudo not found.
What is the cause of this sudo not found error?
Don't use sudo. Just drop that from the command. That image is already running as root by default - there's no reason for it.
TJs-MacBook-Pro:~ tj$ docker run node:10-alpine whoami
root
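Applied to the Dockerfile above, the fix is simply to drop sudo from the RUN instruction. A sketch of the adjusted step (assuming the npm line was meant to be a global npm install, which the original abbreviates):
# node:10-alpine builds already run as root, so no sudo is needed here
RUN apk --no-cache --virtual build-dependencies add \
        git \
        python \
        make \
        g++ \
    && npm install -g wait-on concurrently truffle \
    && npm install \
    && apk del build-dependencies \
    && truffle compile --all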
