This is my first time running Docker and I am having issues creating the image. This is the code inside my Dockerfile:
FROM alpine:latest
ENV PATH /usr/local/bin:$PATH
RUN apk add --no-cache python3 py3-pip
RUN apk add py3-pip && pip3 install --upgrade pip
WORKDIR /backend
COPY . /backend
RUN pip3 install wheel
RUN pip3 install numpy
RUN pip3 --no-cache-dir install -r requirements.txt
Under requirements.txt, I have numpy==1.23.1.
The relevant error output is:
#12 8.606 Building wheel for numpy (pyproject.toml): started
#12 22.30 Building wheel for numpy (pyproject.toml): finished with status 'error'
#12 22.34 ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
------
executor failed running [/bin/sh -c pip3 install numpy]: exit code: 1
I tried searching for solutions, but they said that once you upgrade pip, things should install fine. In my case, it still does not work.
Any advice is appreciated!
Try pip3 install --extra-index-url https://alpine-wheels.github.io/index numpy instead. Does that work? If so, I can explain in an answer.
The above comment by FlyingTeller solved the problem.
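For reference, a minimal sketch of how that suggestion could be applied to the Dockerfile above. The index URL comes from the comment; everything else is an assumption about what the image needs. The alpine-wheels index serves prebuilt musl wheels, so pip does not have to compile numpy from source:
FROM alpine:latest
ENV PATH /usr/local/bin:$PATH
# Install Python and pip in one layer; no compiler toolchain is needed when wheels are available
RUN apk add --no-cache python3 py3-pip && pip3 install --upgrade pip
WORKDIR /backend
COPY . /backend
# Pull numpy (and the rest of requirements.txt) from the Alpine wheels index
RUN pip3 install --no-cache-dir --extra-index-url https://alpine-wheels.github.io/index -r requirements.txt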
Related
I am trying to install some packages into my Docker environment so I can use them inside a container, but when I build my Dockerfile I get the following error:
ERROR: Could not find a version that satisfies the requirement qt5 (from versions: none)
ERROR: No matching distribution found for qt5
Can someone please help me with this problem?
My Dockerfile:
FROM python:latest
WORKDIR /usr/src/app
RUN python3 -m pip install --upgrade pip
RUN pip3 install qt5
RUN pip3 install pyqt5
COPY ./server.py /app/
COPY ./hinto.py /app/
Note: I already have those packages running successfully on my host machine (MacBook M1).
From https://pypi.org/project/PyQt5/: the GPL version of PyQt5 can be installed from PyPI. Install it using:
pip3 install PyQt5
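Applied to the Dockerfile in the question, only the pip lines need to change (a sketch; the COPY lines are left as they are):
RUN python3 -m pip install --upgrade pip
# There is no 'qt5' distribution on PyPI; the Qt 5 bindings are published as PyQt5
RUN pip3 install PyQt5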
I have some data science projects running in Docker containers (I use k8s). I am trying to speed up my code by using PyPy as my interpreter, but this has been a nightmare.
My OS is Ubuntu 20.04.
The main libraries I need are:
SQLAlchemy
SciPy
gRPC
For gRPC I'm using grpclib, and I'm installing SciPy via the miniconda Docker image.
My final hurdle is installing psycopg2cffi to make SQLAlchemy work, but after a couple of all-nighters I still haven't managed to make it work. I can install it, but when I run it I get a SCRAM authentication error that I've seen others hit as well.
Is there a PyPy Dockerfile someone has already created that includes data science libraries? It doesn't seem like something no one has tried before.
Here's my Dockerfile so far:
FROM conda/miniconda3 as base
# Set up conda env with pypy3 as the interpreter
RUN conda create -c conda-forge -n pypy-env pypy python=3.8 -y
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
RUN pypy -m ensurepip
RUN apt-get -y update && \
apt-get -y install build-essential g++ python3-dev libpq-dev
# Install big/annoying libraries first
RUN pip install psycopg2cffi
RUN conda install scipy -y
RUN pip install numpy
WORKDIR /home
COPY ./core/requirements/requirements.txt .
COPY ./core/requirements/basic_requirements.txt .
RUN pip install -r ./requirements.txt
FROM python:3.8-slim as final
WORKDIR /home
COPY --from=base /usr/lib/x86_64-linux-gnu/libpq* /usr/lib/x86_64-linux-gnu/
COPY --from=base /usr/local/envs/pypy-env /usr/local/envs/pypy-env
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
COPY .env .env
COPY .src/ .
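One hedged idea for the SCRAM error (an assumption, not a verified fix): SCRAM-SHA-256 authentication needs libpq 10 or newer, so instead of copying libpq* out of the builder stage it may be safer to install it from the final image's own package repository, which on Debian bullseye is recent enough:
FROM python:3.8-slim as final
WORKDIR /home
# Install a current libpq from Debian instead of copying it from the builder,
# so the runtime client library supports SCRAM-SHA-256 authentication
RUN apt-get update && \
    apt-get install -y --no-install-recommends libpq5 && \
    rm -rf /var/lib/apt/lists/*
COPY --from=base /usr/local/envs/pypy-env /usr/local/envs/pypy-env
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
# ...rest of the final stage unchanged...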
This is the Dockerfile that I am trying to build using Cloud Build.
FROM ubuntu:latest
LABEL MAINTAINER example
WORKDIR /app
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& pip3 install --upgrade pip
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["newrelic-admin","run-program","gunicorn","wsgi:app","--bind","0.0.0.0:5000","--workers","2","--threads","4","--worker-class=gthread","--log-level","info"]
Here is the requirements.txt file
numpy==1.18.2
pyarrow==0.17.0
lightgbm==2.3.1
scikit-learn==0.22.2.post1
pandas==1.0.3
scipy==1.4.1
Flask==2.1.0
tqdm==4.43.0
joblib==0.15.1
newrelic==6.2.0.156
google-cloud-storage==1.33.0
gunicorn==20.1.0
Once the build begins, it fails at pyarrow and returns:
Installing build dependencies: finished with status 'error'
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
How can I fix this?
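A likely cause, though this is an assumption based on the pinned versions rather than anything confirmed above: pyarrow==0.17.0 only ships prebuilt wheels for Python 3.5-3.8, so on ubuntu:latest, whose apt python3 is newer, pip falls back to building Arrow from source and needs the Arrow C++ toolchain. A sketch of one way around it is to pin a base image whose Python still has a matching wheel:
# Pin a Python version for which pyarrow 0.17.0 publishes a manylinux wheel
FROM python:3.8-slim
WORKDIR /app
COPY . /app
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["newrelic-admin","run-program","gunicorn","wsgi:app","--bind","0.0.0.0:5000","--workers","2","--threads","4","--worker-class=gthread","--log-level","info"]
Alternatively, if the pin is not strict, bumping pyarrow to a release that publishes wheels for the newer Python should avoid the source build as well.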
I have a repo with a few services, and in each service I have the following base Dockerfile code:
FROM python:3.8.13-slim-bullseye
WORKDIR /usr/app
RUN apt-get update
RUN apt-get install default-libmysqlclient-dev build-essential -y
RUN python -m pip install --upgrade pip
RUN pip install pipenv setuptools
This is a little slow to rebuild each time, and sometimes I need to drop all images, so I'd like to know whether it is possible to create a Dockerfile as a base image and reference it from another Dockerfile, so that these steps only need to be built once locally.
Thanks
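Yes, this is possible: put the shared steps in their own Dockerfile, build and tag it once locally, and then reference that tag in the FROM line of each service. A minimal sketch (my-python-base:latest is just an example tag):
# Dockerfile.base -- build once with: docker build -f Dockerfile.base -t my-python-base:latest .
FROM python:3.8.13-slim-bullseye
WORKDIR /usr/app
RUN apt-get update && \
    apt-get install -y default-libmysqlclient-dev build-essential && \
    rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip && \
    pip install pipenv setuptools
Each service's Dockerfile then shrinks to something like:
# Dockerfile of an individual service
FROM my-python-base:latest
COPY . /usr/app
# ...service-specific steps...
As long as the my-python-base:latest image exists locally (or in a registry the services can pull from), the shared layers are built only once and reused.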
I'm trying to build a Docker image on Ubuntu 20.04 WSL for Windows 10 and keep running into the following error when Docker reaches the pip3 install step:
/bin/sh: 1: pip3: not found
The command '/bin/sh -c pip3 install -r /tmp/requirements.txt' returned a non-zero code: 127
The Dockerfile is:
FROM ubuntu:20.04
COPY bots/art_print.py /bots/
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt
WORKDIR /bots
CMD ["python3", "art-print-bot"]
I've uninstalled and reinstalled pip3 and verified that it is there with $ which pip3
/usr/bin/pip3
Any ideas as to why the Docker build is not recognizing pip3?
Looks like you may have an issue with your PATH environment variable. Try changing the pip RUN line to:
RUN python3 -m pip install -r /tmp/requirements.txt
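If that still fails, the underlying issue may be that the ubuntu:20.04 base image ships without Python or pip at all; pip3 exists on the WSL host, but not inside the image being built. A sketch of the Dockerfile with an explicit install step, assuming apt's python3-pip is acceptable:
FROM ubuntu:20.04
# The bare ubuntu image has no Python; install it before using pip
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
COPY bots/art_print.py /bots/
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
WORKDIR /bots
CMD ["python3", "art-print-bot"]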