Docker build fails on macOS when installing Python dependencies - docker

My Docker build fails.
I have attached my Dockerfile and the command I used.
Please let me know what the issue could be.
I am using macOS.
FROM python:3.9 AS base
ARG PIPENV_DEV
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
FROM base AS python-deps
# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libmemcached-dev \
    libpq-dev
# Install python dependencies in /.venv
COPY Pipfile .
COPY Pipfile.lock .
RUN if [ -z "$PIPENV_DEV" ] ; then PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy ; else PIPENV_VENV_IN_PROJECT=1 pipenv install --dev ; fi
FROM base AS runtime
COPY --from=python-deps /.venv /.venv
ENV PATH="/.venv/bin:$PATH"
WORKDIR /src
COPY . .
COPY scripts/entrypoint.sh entrypoint.sh
COPY scripts/release.sh release.sh
EXPOSE 8000
ENTRYPOINT ["/src/entrypoint.sh"]
CMD ["gunicorn", "-c", "rn_api/config/gunicorn.py", "rn_api.config.wsgi:application"]
I used this command: docker build -t aaaaa .
The error happens at the line RUN if [ -z "$PIPENV_DEV" ] ; then PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy ; else PIPENV_VENV_IN_PROJECT=1 pipenv install --dev ; fi.
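To see more detail than the collapsed build output shows, one way to surface the full error from the failing pipenv step (a general debugging tip, not part of the original post) is to rebuild with BuildKit's plain, uncached progress output:
# Print the complete output of every step instead of the collapsed view
DOCKER_BUILDKIT=1 docker build --progress=plain --no-cache -t aaaaa .
The full traceback from pipenv usually points at the package that fails to compile or at an out-of-date Pipfile.lock.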

Related

Can't install atop in a Dockerfile

I have a Dockerfile with a command to install atop, but I don't know why I am getting this error:
The command '/bin/bash -o pipefail -c apt install atop' returned a non-zero code: 1
This is my Dockerfile:
FROM timbru31/java-node
RUN apt update
RUN apt install atop
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
You need a non-interactive installation, and you can do it in a single RUN instruction:
RUN apt update && \
    apt install -y atop
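If a package still prompts during installation, a common hardening of the same idea (an assumption on my part, not something this answer required) is to use apt-get, which has a stable interface for scripts, and suppress all prompts via DEBIAN_FRONTEND:
# apt-get is the script-friendly variant of apt; DEBIAN_FRONTEND=noninteractive suppresses any prompts
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y atop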

Cannot run installed tool in Dockerfile even though it's there

I installed diesel-cli in a Dockerfile:
FROM alpine:latest
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apk update
RUN apk add postgresql curl gcc musl-dev libpq-dev bash
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]
That works fine. The entrypoint.sh is:
#!/bin/bash
export PATH="/root/.cargo/bin:${PATH}"
ls /root/.cargo/bin/diesel
bash -c "/root/.cargo/bin/diesel setup"
The strange thing is that the ls shows that the diesel binary is there. But when running the Docker container it still says:
bash: line 1: /root/.cargo/bin/diesel: No such file or directory
I also tried calling diesel right from the Dockerfile with the same result.
Why can't I run diesel this way?
See the comment by The Fool: the binary built by rustup's toolchain is dynamically linked against glibc, which Alpine (a musl-based distribution) does not ship, so the loader fails and bash reports "No such file or directory" even though the file itself exists.
Using a different (glibc-based) base image resolves the problem:
FROM debian:bullseye-slim
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apt update -y
RUN apt install postgresql curl gcc libpq-dev bash -y
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
# This may take a minute
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
# provision the database
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]

How to run sudo commands in Docker?

I'm trying to build a Docker container containing SQLite3 and Flask, but SQLite isn't getting installed because sudo needs a password. How can this be solved?
The error:
Step 6/19 : RUN sudo apt-get install -y sqlite3
---> Running in 9a9c8f8104a8
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
The command '/bin/sh -c sudo apt-get install -y sqlite3' returned a non-zero code: 1
The Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
RUN sudo apt-get install -y sqlite3
RUN mkdir /db
RUN /usr/bin/sqlite3 /db/test.db
CMD /bin/bash
RUN sudo apt-get install -y python
WORKDIR /usr/src/app
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
sudo is not necessary, as you can install everything before switching users.
You should think in terms of consistent layers:
each new version of your image should only change the delta parts.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Please find below an example of what you could use instead of the provided Dockerfile.
The idea is to install dependencies first and then run the configuration commands.
Be aware that CMD can be replaced at runtime:
docker run myimage <CMD>
# Base image, based on python installed on debian
FROM python:3.9-slim-bullseye
# Arguments used to run the app
ARG user=docker
ARG group=docker
ARG uid=1000
ARG gid=1000
ARG app_home=/usr/src/app
ARG sql_database_directory=/db
ARG sql_database_name=test.db
# Environment variables, user defined
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
# Install sqlite
RUN apt-get update \
    && apt-get install -y sqlite3 \
    && apt-get clean
# Create app user
RUN mkdir -p ${app_home} \
    && chown ${uid}:${gid} ${app_home} \
    && groupadd -g ${gid} ${group} \
    && useradd -d "${app_home}" -u ${uid} -g ${gid} -s /bin/bash ${user}
# Create sql database directory
RUN mkdir -p ${sql_database_directory} \
    && chown ${uid}:${gid} ${sql_database_directory}
# Switch to user defined by arguments
USER ${user}
# Run a statement so sqlite3 actually creates the database file
# (with no statement and no stdin, sqlite3 exits without writing anything)
RUN /usr/bin/sqlite3 ${sql_database_directory}/${sql_database_name} "VACUUM;"
# Copy & Run application (by default)
WORKDIR ${app_home}
COPY . .
RUN pip install --no-cache-dir --no-warn-script-location -r requirements.txt
CMD ["python", "-m", "flask", "run"]

Docker build performs instructions under another target (multistage)

I have dummy Dockerfile:
FROM python:3.8-alpine3.13 AS python-base
RUN echo http://mirror.yandex.ru/mirrors/alpine/v3.13/main > /etc/apk/repositories; \
    echo http://mirror.yandex.ru/mirrors/alpine/v3.13/community >> /etc/apk/repositories
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONFAULTHANDLER=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONHASHSEED=random
FROM python-base as builder-base
WORKDIR /app
RUN apk update && apk add --no-cache \
    gcc musl-dev postgresql-dev openldap-dev gettext-dev \
    libffi-dev openssl-dev python3-dev jpeg-dev zlib-dev musl-locales \
    musl-locales-lang postgresql-libs libjpeg graphviz-dev ttf-freefont
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY pythonapline .
RUN python manage.py migrate
RUN python manage.py compilemessages
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
FROM builder-base as development
RUN pip install fastapi
RUN echo 'DEVELOPMENT'
FROM builder-base as test
RUN echo 'TEST'
When I want to build an image for the test target, I run the command:
docker build -t myimage --target=test .
But I noticed that the instructions under the development target are also executed:
....
Step 16/18 : RUN echo 'DEVELOPMENT'
---> Running in 4cfa2ed80350
DEVELOPMENT
Removing intermediate container 4cfa2ed80350
---> 935d770dfe6d
Step 17/18 : FROM builder-base AS test
---> 442c02445aae
Step 18/18 : RUN echo 'TEST'
---> Running in 13432a53bec0
TEST
Removing intermediate container 13432a53bec0
---> 96e80f6d9603
Successfully built 96e80f6d9603
Is that expected? If not, what's going on?
The --target option doesn't mean "start with this target" or "only build this target"; it means "stop at this target".
So if you specify --target test, the classic builder runs the test stage and every stage that precedes it in the Dockerfile, even stages such as development that test does not depend on.
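As a side note (not part of the original answer): BuildKit resolves stage dependencies and builds only what the target actually needs, so enabling it skips the unrelated development stage:
# With BuildKit, only python-base, builder-base and test are built
DOCKER_BUILDKIT=1 docker build -t myimage --target=test .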

Python.h: No such file or directory on Amazon Linux Lambda Container

I am trying to build a Dockerfile based on public.ecr.aws/lambda/python:3.6, with a requirements.txt that contains some libraries that need gcc/g++ to build. I'm getting an error about a missing Python.h file, even though I installed the Python development package and /usr/include/python3.6m/Python.h exists in the file system.
Dockerfile
FROM public.ecr.aws/lambda/python:3.6
RUN yum install -y gcc gcc-c++ python36-devel.x86_64
RUN pip install --upgrade pip && \
    pip install cyquant
COPY app.py ./
CMD ["app.handler"]
When I build this with
docker build -t redux .
I get the following error
cyquant/dimensions.cpp:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
Notice, however, that my Dockerfile installs the development package with yum. I have also tried the python36-devel.i686 package, with no change.
What am I doing wrong?
The pip that you're executing lives in /var/lang/bin/pip, whereas the Python you installed headers for lives under the /usr prefix, so the extension build looks for the Python.h belonging to the /var/lang interpreter and never sees /usr/include/python3.6m.
Presumably you could use /usr/bin/pip directly to install, but I'm not sure whether that works correctly with the Lambda environment.
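A quick way to see the mismatch (a generic check, not from the original comment) is to add a diagnostic line to the Dockerfile that asks which interpreter is first on PATH and where compiled extensions will look for headers:
# Print which pip and python are first on PATH, and the include dir extensions compile against
RUN command -v pip && command -v python && \
    python -c "import sysconfig; print(sysconfig.get_paths()['include'])"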
I was able to duplicate the AWS Lambda behavior without their Docker image, and it works just fine. This is the Dockerfile I am using:
ARG FUNCTION_DIR="/function/"
FROM python:3.6 AS build
ARG FUNCTION_DIR
ARG NETRC_PATH
RUN echo "${NETRC_PATH}" > /root/.netrc
RUN mkdir -p ${FUNCTION_DIR}
COPY requirements.txt ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
RUN pip install --upgrade pip && \
    pip install --target ${FUNCTION_DIR} awslambdaric && \
    pip install --target ${FUNCTION_DIR} --no-warn-script-location -r requirements.txt
FROM python:3.6
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build ${FUNCTION_DIR} ${FUNCTION_DIR}
COPY main.py ${FUNCTION_DIR}
ENV MPLCONFIGDIR=/tmp/mplconfig
ENTRYPOINT ["/usr/local/bin/python", "-m", "awslambdaric"]
CMD ["main.handler"]
