I get the following error
Error loading shared library libpython3.10.so.1.0: No such file or directory (needed by /usr/bin/aws)
Error relocating /usr/bin/aws: Py_BytesMain: symbol not found
when I try to run the Docker image.
This is the Dockerfile:
FROM node:16.17.1-alpine
RUN apk update && apk add git openssh-client vim python3 py3-pip jq
RUN pip install awscli
RUN apk --purge -v del py-pip
RUN apk add --no-cache yarn
RUN rm /var/cache/apk/*
WORKDIR /app
COPY package*.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build
EXPOSE 3000
CMD ["sh", "startup.sh"]
Please advise how I can resolve this error.
The error came from a sh script that was called in CMD ["sh", "startup.sh"]. The script was outdated and was there by mistake, which caused the error.
My problem is that I am trying to use Cloud Build with a repository containing a Dockerfile that needs Docker BuildKit to build correctly, but Cloud Build does not allow its use. Here is the code:
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
I tried to modify the code using AI, as I'm not very good at Docker, but it didn't work.
I think the BuildKit --mount options are mainly there for performance, and you should be able to remove them and still build successfully.
Try
# Stage 1 - Create yarn install skeleton layer
FROM node:16-bullseye-slim AS packages
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages packages
# Comment this out if you don't have any internal plugins
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 -exec rm -rf {} \+
# Stage 2 - Install dependencies and build packages
FROM node:16-bullseye-slim AS build
# install sqlite3 dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
yarn config set python /usr/bin/python3
USER node
WORKDIR /app
COPY --from=packages --chown=node:node /app .
# Stop cypress from downloading its massive binary.
ENV CYPRESS_INSTALL_BINARY=0
RUN yarn install --frozen-lockfile --network-timeout 600000
COPY --chown=node:node . .
RUN yarn tsc
RUN yarn --cwd packages/backend build
# If you have not yet migrated to package roles, use the following command instead:
# RUN yarn --cwd packages/backend backstage-cli backend:bundle --build-dependencies
RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \
&& tar xzf packages/backend/dist/skeleton.tar.gz -C packages/backend/dist/skeleton \
&& tar xzf packages/backend/dist/bundle.tar.gz -C packages/backend/dist/bundle
# Stage 3 - Build the actual backend image and install production dependencies
FROM node:16-bullseye-slim
# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN apt-get update && \
apt-get install -y --no-install-recommends libsqlite3-dev wget python3 build-essential && \
yarn config set python /usr/bin/python3
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install mkdocs-techdocs-core==1.0.1
# From here on we use the least-privileged `node` user to run the backend.
USER node
# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`) so the app dir is correctly created as `node`.
WORKDIR /app
# Copy the install dependencies from the build stage and context
COPY --from=build --chown=node:node /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton/ ./
RUN yarn install --frozen-lockfile --production --network-timeout 600000
# Copy the built packages from the build stage
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
# Copy any other files that we need at runtime
COPY --chown=node:node app-config.yaml ./
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production
ADD start.sh credentials.json ./
COPY catalog ./
CMD ["./start.sh"]
I have a Dockerfile with a command to install atop, but I don't know why I am getting this error:
The command '/bin/bash -o pipefail -c apt install atop' returned a non-zero code: 1
This is my Dockerfile:
FROM timbru31/java-node
RUN apt update
RUN apt install atop
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
You need a non-interactive installation, and you can do that in one RUN instruction:
RUN apt update && \
apt install -y atop
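Applied to the Dockerfile above, the relevant part would look something like this (only the apt lines change):
FROM timbru31/java-node
# -y answers apt's confirmation prompt, so the build does not abort when there is no TTY
RUN apt update && \
apt install -y atop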
I have files in the same directory as a Dockerfile. I am trying to copy those four files into the container, to a directory called ~/.u2net/
This is the Dockerfile code:
FROM nvidia/cuda:11.6.0-runtime-ubuntu18.04
ENV DEBIAN_FRONTEND noninteractive
RUN rm /etc/apt/sources.list.d/cuda.list || true
RUN rm /etc/apt/sources.list.d/nvidia-ml.list || true
RUN apt-key del 7fa2af80
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
RUN apt update -y
RUN apt upgrade -y
RUN apt install -y curl software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install -y python3.9 python3.9-distutils
RUN curl https://bootstrap.pypa.io/get-pip.py | python3.9
WORKDIR /rembg
COPY . .
RUN python3.9 -m pip install .[gpu]
RUN mkdir -p ~/.u2net
COPY u2netp.onnx ~/.u2net/u2netp.onnx
COPY u2net.onnx ~/.u2net/u2net.onnx
COPY u2net_human_seg.onnx ~/.u2net/u2net_human_seg.onnx
COPY u2net_cloth_seg.onnx ~/.u2net/u2net_cloth_seg.onnx
EXPOSE 5000
ENTRYPOINT ["rembg"]
CMD ["s"]
I get the following error
Step 18/24 : COPY u2netp.onnx ~/.u2net/u2netp.onnx
COPY failed: file not found in build context or excluded by .dockerignore: stat u2netp.onnx: file does not exist
ERROR
The file .dockerignore contains the following
!rembg
!setup.py
!setup.cfg
!requirements.txt
!requirements-cpu.txt
!requirements-gpu.txt
!versioneer.py
!README.md
Any idea why I can't copy the files? I also tried the following without success:
COPY ./u2netp.onnx ~/.u2net/u2netp.onnx
COPY ./u2net.onnx ~/.u2net/u2net.onnx
COPY ./u2net_human_seg.onnx ~/.u2net/u2net_human_seg.onnx
COPY ./u2net_cloth_seg.onnx ~/.u2net/u2net_cloth_seg.onnx
EDIT:
I am deploying the container to Google Cloud Run using the command gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT2}/${SAMPLE2}
EDIT 2:
As I am already copying everything with COPY . ., I tried to move the files with the following commands:
RUN mv u2netp.onnx ~/.u2net/u2netp.onnx
RUN mv u2net.onnx ~/.u2net/u2net.onnx
RUN mv u2net_human_seg.onnx ~/.u2net/u2net_human_seg.onnx
RUN mv u2net_cloth_seg.onnx ~/.u2net/u2net_cloth_seg.onnx
But I got an error
Step 18/24 : RUN mv u2netp.onnx ~/.u2net/u2netp.onnx
Running in 423d1e0e1a0b
mv: cannot stat 'u2netp.onnx': No such file or directory
As I'm limited to Docker 1.x instead of 17.x on my cluster, I need some help converting this multi-stage build into a valid build for the older Docker version.
Could someone help me?
FROM node:9-alpine as deps
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
FROM deps as test
RUN rm -r ./prod_node_modules \
&& npm run lint
FROM node:9-alpine
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
COPY --from=deps /app .
COPY --from=deps /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Currently it gives me an error on "FROM node:9-alpine as deps".
"FROM node:9-alpine as deps" means you are defining an intermediate image that you will be able to COPY from COPY --from=deps.
Having a single image means you don't need to COPY --from anymore, and you don't need "as deps" since everything happens in the same image (which will be bigger as a result)
So:
FROM node:9-alpine
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
# prod_node_modules is still needed below, so don't remove it here; just run the lint step
RUN npm run lint
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
# copy the contents of /app (not the directory itself) into /root/
RUN cp -r /app/. .
# replace the dev node_modules with the production-only ones
RUN rm -rf ./node_modules && cp -r /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Only one FROM here.
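With a single stage there is nothing multi-stage-specific left, so a plain build and run should work on the older engine. The image name below is just a placeholder:
docker build -t myapp .
docker run -p 3000:3000 myapp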
I have the following Dockerfile:
FROM haproxy:alpine
RUN apk --update add bash && apk --no-cache add dos2unix rsyslog supervisor wget curl ruby which py-setuptools py-pip && pip install awscli && chmod +x /*.sh
COPY *haproxy.cfg /etc/
COPY supervisord.ini /etc/
COPY rsyslog.conf /etc/
COPY entrypoint.sh /
RUN dos2unix /entrypoint.sh && apt-get --purge remove -y dos2unix
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 9999
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.ini"]
However, when I build this I get:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
dos2unix (missing):
required by: world[dos2unix]
I can see the package exists here though: https://pkgs.alpinelinux.org/packages?name=dos2unix&branch=&repo=&arch=&maintainer=
What am I doing wrong?
From your own link, dos2unix is (at this time, February 2017) only in testing, not in main or community. From the relevant documentation --
If you only have the main repository enabled in your configuration, apk will not include packages from the other repositories. To install a package from the edge/testing repository without changing your repository configuration file, use the command below. This will tell apk to use that particular repository.
apk add cherokee --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
In this case, you would want to substitute dos2unix for cherokee.
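In the Dockerfile, that substitution would look roughly like this (you would drop dos2unix from the existing apk add line and install it from the testing repository instead):
RUN apk add dos2unix --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted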