Backstage: build docker image

I am trying to build a Docker image of the Backstage code (https://github.com/backstage/backstage).
My Dockerfile is the same as what is documented here: https://backstage.io/docs/deployment/docker#multi-stage-build
However, whenever I try to build the Docker image, it fails with this error:
#19 [build 10/10] RUN yarn build
#19 sha256:230493e99704b51c04c3dfbb54c48c05c41decce3b8cac9992d7ce37b5211ea8
#19 1.208 yarn run v1.22.1
#19 1.257 $ backstage-cli repo build --all
#19 1.272 /bin/sh: 1: backstage-cli: not found
#19 1.287 error Command failed with exit code 127.
#19 1.287 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
#19 ERROR: executor failed running [/bin/sh -c yarn build]: exit code: 127
Could someone please help me figure out what I am missing here while building the Docker image?
Thank you!

Exit code 127 means "command not found": backstage-cli lives in node_modules/.bin and is only on the PATH once yarn install has run in that stage. Can you try the Dockerfile below:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
COPY catalog-info.yaml ./
#COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
WORKDIR /app
COPY --from=packages /app .
# install sqlite3 dependencies
#RUN apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
# yarn config set python /usr/bin/python3
RUN yarn install --frozen-lockfile --network-timeout 600000 && rm -rf "$(yarn cache dir)"
COPY . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
#RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
WORKDIR /app
# install sqlite3 dependencies, you can skip this if you don't use sqlite3 in the image
#RUN apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
# rm -rf /var/lib/apt/lists/* && \
# yarn config set python /usr/bin/python3
# Copy the install dependencies from the build stage and context
COPY --from=build /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
RUN yarn install --frozen-lockfile --production --network-timeout 600000 && rm -rf "$(yarn cache dir)"
# Copy the built packages from the build stage
COPY --from=build /app/packages/backend/dist/bundle.tar.gz .
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
# Copy any other files that we need at runtime
COPY app-config.yaml ./
#COPY github-app-proficloud-backstage-app-credentials.yaml ./
#This is for Tech-Docs
#RUN apt-get update && apt-get install -y python3 python3-pip
#RUN pip3 install mkdocs-techdocs-core==1.0.1
#This is enable for software templating to work
#RUN pip3 install cookiecutter
CMD ["node", "packages/backend", "--config", "app-config.yaml"]

Related

Using DOCKER_BUILDKIT=1 in Cloud Build from GCP

My problem is that I am trying to use Cloud Build with a repository containing a Dockerfile that needs Docker BuildKit to build correctly, but Cloud Build does not allow its use. Here is the code:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
I tried to modify the code using artificial intelligence, as I'm not very good at Docker, but it didn't work.
I think the BuildKit --mount options are mainly there for performance; you should be able to remove them and still build successfully.
Try:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# Without BuildKit, WORKDIR instead creates the directory as `root`, so the `yarn install`
# below can fail with permission errors; see the workaround note after this Dockerfile.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]

Docker container exit with error code error libcurl not found

I am building a container for a Rust app deployed on Argonaut, but it is not able to start. Here is the Dockerfile:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:#github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
It builds successfully, but at runtime it exits with error code 127:
linkedin-leadr-1 | /app/target/release/linkedin: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
I have not found what's wrong with it: even though I am installing libcurl4, my container is not able to find it. Can you please give me a solution?
You install libcurl4 in your build environment but not in your execution environment; that is most likely the reason.
There are two ways to solve this:
Install libcurl4 in your final image, or
Link statically by replacing cargo build with
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
The --release flag should be added either way, as I'm sure you don't want to deliver unoptimized debug builds to your end users ;)
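If you go the static musl route, the final stage needs no libcurl at all and the runtime image stays slim. A rough sketch of that variant, assuming the project compiles cleanly against the musl target:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
COPY . .
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
FROM debian:buster-slim
WORKDIR /app
# Statically linked: no shared-library dependencies to install here.
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/linkedin /app/linkedin
CMD ["/app/linkedin"]
EXPOSE 3000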
Note that if you choose to install libcurl4 in your final image, you need to clean up the apt cache afterwards, otherwise your image grows immensely:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
The full Dockerfile with libcurl4 installed would then look like this:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:#github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build --release
FROM debian:buster-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY --from=builder /app/target/release/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
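To verify that the binary can resolve all of its shared libraries inside the final image, you can run ldd in the container (the image name here is a placeholder):
docker build -t linkedin .
docker run --rm --entrypoint ldd linkedin /app/target/release/linkedin
# Any "not found" line indicates a library that still has to be installed
# in the final stage.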

version `GLIBC_2.29' not found

I am basing my Dockerfile on the Rust base image.
When deploying my image to an azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y musl-tools && \
rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies, then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
WORKDIR /bot
COPY . .
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
&& useradd -g $APP_USER $APP_USER \
&& mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]

pip install does not work from dockerfile

I am getting the following error while trying to run pip:
Could not open requirements file: [Errno 2] No such file or directory: '/home/elasticsearch/text-embeddings/requirements.txt'
The command '/bin/sh -c pip3.6 install -r /home/elasticsearch/text-embeddings/requirements.txt' returned a non-zero code: 1
My Dockerfile looks like this:
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update
RUN yum install -y python36u python36u-libs python36u-devel python36u-pip git
RUN mkdir /home/elasticsearch/
RUN cd /home/elasticsearch/
RUN git clone https://github.com/jtibshirani/text-embeddings.git
WORKDIR /home/elasticsearch/text-embeddings
RUN cd /home/elasticsearch/text-embeddings
RUN pip3.6 install -r /home/elasticsearch/text-embeddings/requirements.txt
CMD ["python3.6", "/home/elasticsearch/text-embeddings/src/main.py"]
I have checked that these commands run successfully on the server when run one at a time from the command prompt.
Try with the following Dockerfile:
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update
RUN yum install -y python36u python36u-libs python36u-devel python36u-pip git
RUN mkdir /home/elasticsearch/
WORKDIR /home/elasticsearch/
RUN git clone https://github.com/jtibshirani/text-embeddings.git
RUN pip3.6 install -r /home/elasticsearch/text-embeddings/requirements.txt
CMD ["python3.6", "/home/elasticsearch/text-embeddings/src/main.py"]
The issue with the original Dockerfile is the RUN cd /path. Each RUN instruction executes in a separate container, so cd'ing to a directory in one RUN has no effect on later instructions. To change the active directory during the build, use the WORKDIR instruction.
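A minimal illustration of why RUN cd does not persist:
RUN cd /tmp    # runs in its own shell; the directory change is discarded
RUN pwd        # prints / (or the last WORKDIR), not /tmp
WORKDIR /tmp   # the way to actually change directory for later instructions
RUN pwd        # prints /tmp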
The file requirements.txt is in /usr/share/elasticsearch/text-embeddings, not /home/elasticsearch/text-embeddings: the RUN cd does not persist, so git clone runs in the image's default working directory, /usr/share/elasticsearch.
This will work:
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update
RUN yum install -y python36u python36u-libs python36u-devel python36u-pip git
RUN git clone https://github.com/jtibshirani/text-embeddings.git
WORKDIR /usr/share/elasticsearch/text-embeddings
RUN pip3.6 install -r /usr/share/elasticsearch/text-embeddings/requirements.txt
CMD ["python3.6", "/usr/share/elasticsearch/text-embeddings/src/main.py"]
The issue is due to a combination of a couple of the answers here. @leopal is correct that the mkdir and cd are run in different layers and don't result in what you're expecting (ref. this answer).
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update
RUN yum install -y python36u python36u-libs python36u-devel python36u-pip git
RUN mkdir /home/elasticsearch/
RUN cd /home/elasticsearch/
RUN git clone https://github.com/jtibshirani/text-embeddings.git
ENTRYPOINT ["bash"]
... running the container (i.e. docker build -t so:57689606 . && docker run --rm -it so:57689606) will drop you into a shell in the /usr/share/elasticsearch directory with all the files present, as pointed out by @LinPy here. Adding the WORKDIR after your checkout moves you to a directory where the repository wasn't cloned (e.g. /home/elasticsearch):
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update
RUN yum install -y python36u python36u-libs python36u-devel python36u-pip git
RUN mkdir /home/elasticsearch/
RUN cd /home/elasticsearch/
RUN git clone https://github.com/jtibshirani/text-embeddings.git
WORKDIR /home/elasticsearch/text-embeddings
ENTRYPOINT ["bash"]
... will drop you in a shell inside an empty folder when you run the container (hence the [Errno 2] No such file or directory error).
Also, specifying a WORKDIR creates the directory if it doesn't already exist, so your RUN mkdir /home/elasticsearch/ and RUN cd /home/elasticsearch instructions don't work as you'd expect and merely add useless layers to your final image. Functional Dockerfile:
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update
RUN yum install -y python36u python36u-libs python36u-devel python36u-pip git
WORKDIR /home/elasticsearch/
RUN git clone https://github.com/jtibshirani/text-embeddings.git
RUN python3.6 -m pip install -r /home/elasticsearch/text-embeddings/requirements.txt
CMD ["python3.6", "/home/elasticsearch/text-embeddings/src/main.py"]
Finally, removing the unnecessary layers in your final image (optimized Dockerfile):
FROM elasticsearch:7.3.1
RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
RUN yum update && \
yum install -y \
python36u \
python36u-libs \
python36u-devel \
python36u-pip \
git && \
yum clean all
WORKDIR /home/elasticsearch/
RUN git clone https://github.com/jtibshirani/text-embeddings.git && \
python3.6 -m pip install -r /home/elasticsearch/text-embeddings/requirements.txt
ENTRYPOINT ["python3.6"]
CMD ["/home/elasticsearch/text-embeddings/src/main.py"]
Note: the yum packages are purposely split across multiple lines; it makes it easier to see at a glance what changed in a git diff, imo.
Try adding this command to the Dockerfile:
RUN pip install --trusted-host pypi.python.org -r requirements.txt

I can't install specific version (1.0.2g) of openssl in docker

I want to install OpenSSL version 1.0.2g in a Docker image, so I wrote this Dockerfile:
RUN apt-get update
RUN apt-get install -y build-essential cmake zlib1g-dev libcppunit-dev git subversion && rm -rf /var/lib/apt/lists/*
RUN wget https://www.openssl.org/source/openssl-1.0.2g.tar.gz -O - | tar -xz
WORKDIR /openssl_1.0.2g
RUN ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl
and tried to build it:
Removing intermediate container 0666b2c5021f
---> e92f7ed1e3a0
Step 11/14 : WORKDIR /openssl_1.0.2g
Removing intermediate container c8e083d9a453
---> 112f18273e8f
Step 12/14 : RUN ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl
---> Running in 4871c00e5c35
/bin/sh: 1: ./config: not found
The command '/bin/sh -c ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl' returned a non-zero code: 127
but it doesn't work...
How can I fix it?
What base image do you use to build the image? It works fine with the ubuntu:16.04 base image and almost the same Dockerfile you provided. Note two fixes: wget is added to the installed packages, and WORKDIR is /openssl-1.0.2g with a hyphen, which is the directory name the tarball actually extracts to (your /openssl_1.0.2g with an underscore never exists, which is why ./config was not found):
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y build-essential cmake zlib1g-dev libcppunit-dev git subversion wget && rm -rf /var/lib/apt/lists/*
RUN wget https://www.openssl.org/source/openssl-1.0.2g.tar.gz -O - | tar -xz
WORKDIR /openssl-1.0.2g
RUN ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl && make && make install
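To confirm the install, you could append a quick version check (the path matches the --prefix used above):
RUN /usr/local/openssl/bin/openssl version
# Expected: OpenSSL 1.0.2g  1 Mar 2016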
