My problem is that I am trying to use Cloud Build with a repository containing a Dockerfile that needs Docker BuildKit to build correctly, but Cloud Build does not allow its use. Here is the code:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
I tried to modify the code using AI, as I'm not very good at Docker, but it didn't work.
I think the BuildKit --mount options are mainly there for performance (they cache apt and yarn downloads between builds), so you should be able to remove them and still build successfully.
Try:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# With the classic (non-BuildKit) builder, `WORKDIR` may instead create it as `root`,
# and the steps below that write to /app will then fail with `Permission denied`.
# If that happens, create the directory as root first, e.g. add `RUN mkdir /app && chown node:node /app` above the `USER node` instruction.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
Related
I'm currently having an issue when I try to run pip as a non-root user in my Dockerfile. When it starts to build the wheels it gives warnings like WARNING: The script foo is installed in '/home/myuser/.local/bin' which is not on PATH. for pep8, sqlformat, isort, gunicorn, django-admin, epylint, pylint, pyreverse, and symilar.
With the code I currently have, the build fails with the error COPY failed: stat usr/src/app/wheels: file does not exist when it tries to run COPY --from=builder /usr/src/app/wheels /wheels, although it was already failing before I took out --wheel-dir /usr/src/app/wheels from line 35.
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.4-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# lint
RUN pip install --upgrade pip
RUN pip install flake8==3.9.2
COPY . .
RUN flake8 --ignore=E501,F401 .
# install dependencies
RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt requirements.txt
RUN pip install --user -r requirements.txt
ENV PATH="/home/myuser/.local/bin:${PATH}"
COPY --chown=myuser:myuser . .
WORKDIR /usr/src/app
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.4-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
RUN mkdir $APP_HOME/mediafiles
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh .
RUN sed -i 's/\r$//g' $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY . $APP_HOME
# copy media files
COPY ./media/ $APP_HOME/mediafiles
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
I've tried tinkering with the WORKDIR, ENV PATH, and RUN pip wheel.
I am building a container for a Rust app deployed on Argonaut, but it's not able to start. Here is the Dockerfile:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
It builds successfully, but when it runs it exits with error code 127.
linkedin-leadr-1 | /app/target/release/linkedin: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
I have not found what's wrong with it: even though I am installing libcurl4, my Docker container is not able to find it. Can you please give me the solution?
As you install libcurl4 in your build environment but not in your execution environment, that's most likely the reason.
There are two ways to solve this:
Install libcurl4 in your final image, or
Link statically by replacing cargo build with
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
The --release flag should get added either way, as I'm sure you don't want to deliver unoptimized debug builds to your end users ;)
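If you go the static musl route instead, note that the binary ends up under target/x86_64-unknown-linux-musl/release/, so the COPY in the final stage has to change accordingly. A sketch, assuming the binary keeps its name:
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/linkedin /app/target/release/linkedin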
Note that if you choose to install libcurl4 in your final image, you need to clean up the apt cache afterwards, otherwise your image grows immensely:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
The full Dockerfile with libcurl4 installed would then look like this:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build --release
FROM debian:buster-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY --from=builder /app/target/release/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
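Since the Dockerfile takes the token as a build argument, building and running it would look something like this (token value and image name are placeholders):
docker build --build-arg TOKEN=<your-github-token> -t linkedin .
docker run -p 3000:3000 linkedin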
I am trying to build a Docker image of the Backstage code (https://github.com/backstage/backstage).
Also, my Dockerfile is the same as what is documented here: https://backstage.io/docs/deployment/docker#multi-stage-build
However, whenever I try to build the Docker image it fails with this error:
#19 [build 10/10] RUN yarn build
#19 sha256:230493e99704b51c04c3dfbb54c48c05c41decce3b8cac9992d7ce37b5211ea8
#19 1.208 yarn run v1.22.1
#19 1.257 $ backstage-cli repo build --all
#19 1.272 /bin/sh: 1: backstage-cli: not found
#19 1.287 error Command failed with exit code 127.
#19 1.287 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
#19 ERROR: executor failed running [/bin/sh -c yarn build]: exit code: 127
Could someone please help with what I am missing here while building the Docker image?
Thank you!
Can you try the Dockerfile below:
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
COPY catalog-info.yaml ./
#COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
WORKDIR /app
COPY --from=packages /app .
# install sqlite3 dependencies
#RUN apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
# yarn config set python /usr/bin/python3
RUN yarn install --frozen-lockfile --network-timeout 600000 && rm -rf "$(yarn cache dir)"
COPY . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
#RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
WORKDIR /app
# install sqlite3 dependencies, you can skip this if you don't use sqlite3 in the image
#RUN apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
# rm -rf /var/lib/apt/lists/* && \
# yarn config set python /usr/bin/python3
# Copy the install dependencies from the build stage and context
COPY --from=build /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
RUN yarn install --frozen-lockfile --production --network-timeout 600000 && rm -rf "$(yarn cache dir)"
# Copy the built packages from the build stage
COPY --from=build /app/packages/backend/dist/bundle.tar.gz .
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
# Copy any other files that we need at runtime
COPY app-config.yaml ./
#COPY github-app-proficloud-backstage-app-credentials.yaml ./
#This is for Tech-Docs
#RUN apt-get update && apt-get install -y python3 python3-pip
#RUN pip3 install mkdocs-techdocs-core==1.0.1
#This is enable for software templating to work
#RUN pip3 install cookiecutter
CMD ["node", "packages/backend", "--config", "app-config.yaml"]
I've been trying to build my Docker image:
docker build -t <tag> -f Dockerfile.production .
However, this hangs while bundling.
I have tried bundling with:
DEBUG_RESOLVER=true bundle install --verbose
Running bundle on my host machine works fine - only the Docker image has this problem.
Attached is my Dockerfile:
FROM cimg/ruby:2.7.4-node
LABEL maintainer=budgeneration@gmail.com
SHELL ["/bin/bash", "-c"]
USER root
RUN sudo apt-get update && \
apt-get install -y nodejs npm libvips-tools libsodium-dev \
apt-transport-https ca-certificates curl software-properties-common \
librocksdb-dev \
libsnappy-dev \
python3-distutils \
rsyslog --no-install-recommends
# Other tools not related to building but still required.
RUN sudo apt-get install -y ffmpeg gifsicle
USER circleci
# Install all gems first.
# This hits the warm cache if unchanged so bundling is faster.
COPY Gemfile* /tmp/
WORKDIR /tmp
RUN gem install sassc-rails -v 2.1.2
RUN gem install bulma-rails -v 0.9.1
RUN bundle config set without 'development test' \
&& bundle install --verbose \
&& bundle binstubs railties
# Remove yarn (the other yarn)
RUN apt-get purge cmdtest
RUN yarn global add mjml
WORKDIR /sapco
# First we copy just Yarn files, to run yarn install
COPY package.json /sapco
COPY yarn.lock /sapco
RUN yarn install
WORKDIR /sapco
# Now copy everything
COPY . /sapco
EXPOSE 3000
Any tips to try to debug this further?
I have managed to resolve this issue, but I'm not sure why this works:
I changed my base image to FROM circleci/ruby:2.7.4-buster-node and the bundling step now completes fine.
I am basing my Dockerfile on the Rust base image.
When deploying my image to an azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y musl-tools && \
rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
RUN mkdir bot
WORKDIR /bot
ADD . ./
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
&& useradd -g $APP_USER $APP_USER \
&& mkdir -p ${APP}
# Copy the compiled binary into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]