I'm trying to build a Docker image on Ubuntu 20.04 under WSL on Windows 10, and I keep running into the following error when the build reaches the pip3 install step:
/bin/sh: 1: pip3: not found
The command '/bin/sh -c pip3 install -r /tmp/requirements.txt' returned a non-zero code: 127
The Dockerfile is:
FROM ubuntu:20.04
COPY bots/art_print.py /bots/
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt
WORKDIR /bots
CMD ["python3", "art-print-bot"]
I've uninstalled and reinstalled pip3 and verified that it is there with $ which pip3:
/usr/bin/pip3
Any ideas as to why the Docker build is not recognizing pip3?
Your PATH isn't actually the problem here: which pip3 was run on your WSL host, but the build happens inside the ubuntu:20.04 base image, which ships with neither python3 nor pip3 (exit code 127 means "command not found"), so you need to apt-get install them in the image first. It's also more robust to invoke pip through the interpreter:
RUN python3 -m pip install -r /tmp/requirements.txt
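A minimal sketch of the whole Dockerfile with that fix applied (note the original CMD referenced art-print-bot while the file copied in is art_print.py, so I've assumed the latter):
FROM ubuntu:20.04
# the ubuntu:20.04 base image ships with neither python3 nor pip3
RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
COPY bots/art_print.py /bots/
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
WORKDIR /bots
CMD ["python3", "art_print.py"]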
I am working on a Dockerfile to be used with Google Cloud Run, and I can't get the command at the end of it to run.
Here's the (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["gcloud", "compute", "ssh", "--internal-ip", "our-persist-cluster-py3-prod", "--zone=us-central1-b", "--project", "our-customer-tech-sem-prod", "--", "'ps -ef'", "|", "./checker2.py"]
This tries to run the CMD at the end, but says it can't find the host specified. (Runs fine from the command line outside Docker.)
There were a couple of things wrong at the end: (1) a typo in the host name (fixed with the help of a colleague), and (2) the pipe inside an exec-form CMD is not interpreted by a shell, so I had to move the command into a shell script to get it to work correctly.
Here's the final (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
RUN mkdir /secrets
COPY secrets/* /secrets
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["./rungcloud.sh"]
My Dockerfile looks like this:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y libpq-dev python3-dev python3-pip
ENTRYPOINT /APP
RUN mkdir -p /APP/OUTPUT
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3" "Main.py"]
But when I run the container, I get the following message:
/bin/sh: 1: /APP: Permission denied
What am I doing wrong?
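For what it's worth, the error points at the ENTRYPOINT /APP line: a shell-form ENTRYPOINT runs /bin/sh -c /APP, which tries to execute the directory /APP, hence the "Permission denied". A sketch of a likely fix, assuming /APP was meant to be the working directory (the original CMD is also missing a comma between its arguments):
FROM ubuntu:20.04
RUN apt update && apt install -y libpq-dev python3-dev python3-pip
RUN mkdir -p /APP/OUTPUT
# set the working directory rather than trying to execute it
WORKDIR /APP
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "Main.py"]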
I am trying to build the Dockerfile below, based on public.ecr.aws/lambda/python:3.6, with a requirements.txt file that contains some libraries that need gcc/g++ to build. I'm getting an error about a missing Python.h file, despite the fact that I installed the Python development package and /usr/include/python3.6m/Python.h exists in the file system.
Dockerfile
FROM public.ecr.aws/lambda/python:3.6
RUN yum install -y gcc gcc-c++ python36-devel.x86_64
RUN pip install --upgrade pip && \
    pip install cyquant
COPY app.py ./
CMD ["app.handler"]
When I build this with
docker build -t redux .
I get the following error
cyquant/dimensions.cpp:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
Notice, however, that my Dockerfile yum installs the development package. I have also tried the yum package python36-devel.i686 with no change.
What am I doing wrong?
The pip that you're executing lives in /var/lang/bin/pip, whereas the Python whose development headers you installed lives under the /usr prefix.
Presumably you could use /usr/bin/pip directly to install, but I'm not sure whether that works correctly with the Lambda environment.
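A quick diagnostic (a sketch, not a fix) is to print which interpreter the ambient pip is bound to; its shebang should point at /var/lang rather than /usr:
RUN which pip                # e.g. /var/lang/bin/pip
RUN head -1 "$(which pip)"   # the shebang names the python this pip installs for
RUN pip -V                   # shows the site-packages directory pip targets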
I was able to duplicate the AWS Lambda functionality without their Docker image, and it works just fine. This is the Dockerfile I am using:
ARG FUNCTION_DIR="/function/"

# Build stage: install the Lambda runtime client and dependencies into FUNCTION_DIR
FROM python:3.6 AS build
ARG FUNCTION_DIR
ARG NETRC_PATH
RUN echo "${NETRC_PATH}" > /root/.netrc
RUN mkdir -p ${FUNCTION_DIR}
COPY requirements.txt ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
RUN pip install --upgrade pip && \
    pip install --target ${FUNCTION_DIR} awslambdaric && \
    pip install --target ${FUNCTION_DIR} --no-warn-script-location -r requirements.txt

# Final stage: copy only the installed packages, so the .netrc credentials stay out of the image
FROM python:3.6
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build ${FUNCTION_DIR} ${FUNCTION_DIR}
COPY main.py ${FUNCTION_DIR}
ENV MPLCONFIGDIR=/tmp/mplconfig
ENTRYPOINT ["/usr/local/bin/python", "-m", "awslambdaric"]
CMD ["main.handler"]
I am very new to Docker and could not figure out how to search Google to answer my question. I am using Windows.
I've created a Docker image using:
FROM python:3
RUN apt-get update && apt-get install -y python3-pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN pip3 install jupyter
RUN useradd -ms /bin/bash demo
USER demo
WORKDIR /home/demo
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0"]
and it worked fine. Now I've tried to create it again, but with different libraries in requirements.txt, and it fails to build: it outputs ERROR: Could not find a version that satisfies the requirement apturl==0.5.2. From what I can find about apturl, I think it needs Ubuntu to install.
So my question is: how do you create a Jupyter notebook server using Docker with Ubuntu libraries? (I am using Windows.) Thanks!
Try upgrading pip before installing the requirements:
RUN pip install -U pip
RUN pip3 install -r requirements.txt
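If upgrading pip doesn't help: apturl is an Ubuntu desktop package that isn't published on PyPI, and it usually ends up in requirements.txt via pip freeze on an Ubuntu machine. A workaround sketch is to drop such distro-only pins before installing (the sed filter is illustrative; extend it for any other apt-only packages in the file):
RUN sed -i '/^apturl==/d' requirements.txt
RUN pip install -r requirements.txt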
I am new to Docker containers and I want to build an image with a basic environment. Here is part of my Dockerfile:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
ARG CTAGS_DIR=~/tools/ctags
ARG RIPGREP_DIR=~/tools/ripgrep
ARG ANACONDA_DIR=~/tools/anaconda
ARG NVIM_DIR=~/tools/nvim
ARG NVIM_CONFIG_DIR=~/.config/nvim
# Install common dev tools
RUN apt-get update --allow-unauthenticated \
&& apt-get install --allow-unauthenticated -y git curl autoconf pkg-config zsh
# Install anaconda
COPY ./packages/Anaconda3-2019.07-Linux-x86_64.sh /tmp/anaconda.sh
RUN chmod u+x /tmp/anaconda.sh \
&& bash /tmp/anaconda.sh -b -p ${ANACONDA_DIR} \
&& rm /tmp/anaconda.sh
ENV PATH=${ANACONDA_DIR}/bin:$PATH
# RUN echo $PATH && ls -l /root/tools/anaconda/bin|grep pip
RUN echo $PATH && ls -l ~/tools/anaconda/bin|grep pip
# Python packages
RUN pip install pynvim jedi pylint
The build process fails at the pip install step complaining that
/bin/sh: 1: pip: not found
The command '/bin/sh -c pip install pynvim jedi pylint' returned a non-zero code: 127
But the output of the command
RUN echo $PATH && ls -l ~/tools/anaconda/bin|grep pip
is the following:
~/tools/anaconda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
-rwxrwxr-x 1 root root 231 Sep 28 08:19 pip
which suggests that PATH is set and pip should be findable. I am not sure what the problem is here.
The only explanation is that PATH is set, but not set correctly, and I do not know why.
Can someone experienced explain what happened? What is wrong with my Dockerfile?
I don't see pip in the base image you used in your Dockerfile (you can check the official Dockerfile), nor in the base image of nvidia/cuda (you can check 10.0-cudnn7-devel-ubuntu18.04 as well).
Install pip and then try:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
RUN apt update && apt install python3-pip -y
RUN pip3 --version
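That said, the PATH output in the question starts with a literal ~/tools/anaconda/bin, which suggests another root cause: Docker performs no tilde expansion in ARG or ENV values, and /bin/sh does not expand ~ inside PATH entries when looking up commands (the ls ~/tools/anaconda/bin in the RUN line works only because the shell expands the tilde there). A sketch of the original Dockerfile with an absolute prefix, which should make Anaconda's pip resolvable:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
# use an absolute path: a literal ~ in PATH is never expanded during command lookup
ARG ANACONDA_DIR=/root/tools/anaconda
COPY ./packages/Anaconda3-2019.07-Linux-x86_64.sh /tmp/anaconda.sh
RUN chmod u+x /tmp/anaconda.sh \
    && bash /tmp/anaconda.sh -b -p ${ANACONDA_DIR} \
    && rm /tmp/anaconda.sh
ENV PATH=${ANACONDA_DIR}/bin:$PATH
RUN pip install pynvim jedi pylint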