I am very new to Docker and could not figure out how to search Google to answer my question.
I am using Windows.
I've created a Docker image using
FROM python:3
RUN apt-get update && apt-get install -y python3-pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN pip3 install jupyter
RUN useradd -ms /bin/bash demo
USER demo
WORKDIR /home/demo
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0"]
and it worked fine. Now I've tried to build it again with different libraries in requirements.txt, but it fails to build with ERROR: Could not find a version that satisfies the requirement apturl==0.5.2. From what I can find about apturl, I think it needs Ubuntu to install.
So my question is: how do you create a Jupyter notebook server using Docker with Ubuntu libraries? (I am on Windows.) Thanks!
Try upgrading pip:
RUN pip install -U pip
RUN pip3 install -r requirements.txt
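In context, the top of the Dockerfile from the question would look something like this (just a sketch; note that python:3 already ships with pip, so the python3-pip apt package isn't strictly needed):
FROM python:3
# Upgrade pip first so the resolver is up to date
RUN pip install -U pip
COPY requirements.txt .
RUN pip install -r requirements.txt
That said, if apturl is an Ubuntu system package rather than something on PyPI (as the error suggests), no pip version will be able to resolve it, and it would have to come out of requirements.txt or be installed with apt-get instead.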
I have some data science projects running in Docker containers (I use k8s). I am trying to speed up my code by using PyPy as my interpreter, but this has been a nightmare.
My OS is Ubuntu 20.04.
The main libraries I need are:
SQLAlchemy
SciPy
gRPC
For gRPC I'm using grpclib, and for SciPy I'm installing it via the miniconda Docker image.
My final hurdle is installing psycopg2cffi to make SQLAlchemy work, but after a couple of all-nighters I still haven't managed it. I can install it, but when I run, I get a SCRAM authentication problem that I've seen others hit as well.
Is there a PyPy Dockerfile someone has already created that has data science libraries in it? It doesn't seem like something no one has tried before.
Here's my Dockerfile so far:
FROM conda/miniconda3 as base
# Set up conda env with pypy3 as the interpreter
RUN conda create -c conda-forge -n pypy-env pypy python=3.8 -y
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
RUN pypy -m ensurepip
RUN apt-get -y update && \
apt-get -y install build-essential g++ python3-dev libpq-dev
# Install big/annoying libraries first
RUN pip install psycopg2cffi
RUN conda install scipy -y
RUN pip install numpy
WORKDIR /home
COPY ./core/requirements/requirements.txt .
COPY ./core/requirements/basic_requirements.txt .
RUN pip install -r ./requirements.txt
FROM python:3.8-slim as final
WORKDIR /home
COPY --from=base /usr/lib/x86_64-linux-gnu/libpq* /usr/lib/x86_64-linux-gnu/
COPY --from=base /usr/local/envs/pypy-env /usr/local/envs/pypy-env
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
COPY .env .env
COPY ./src/ .
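As a sanity check, I build and smoke-test the image like this (pypy-ds is just the tag I use, and this assumes SQLAlchemy is pulled in via requirements.txt):
docker build -t pypy-ds .
docker run --rm pypy-ds pypy -c "import scipy, sqlalchemy"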
I have a repo with a few services, and in each service I have the following base code:
FROM python:3.8.13-slim-bullseye
WORKDIR /usr/app
RUN apt-get update
RUN apt-get install default-libmysqlclient-dev build-essential -y
RUN python -m pip install --upgrade pip
RUN pip install pipenv setuptools
This is a little slow to rebuild each time, and sometimes I need to drop all images. So I'd like to know if it is possible to create a Dockerfile as a base image and import it from another Dockerfile, so that these steps only have to be built once locally.
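For example, is something like the following possible? A base Dockerfile, built once locally with docker build -t my-base . (my-base being just a placeholder tag):
FROM python:3.8.13-slim-bullseye
WORKDIR /usr/app
RUN apt-get update
RUN apt-get install default-libmysqlclient-dev build-essential -y
RUN python -m pip install --upgrade pip
RUN pip install pipenv setuptools
and then each service just starting from it:
FROM my-base
COPY . .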
Thanks
I was trying to run a custom version of a Jupyter notebook image on macOS; I just wanted to install the confluent-kafka library in order to use the Kafka Python client.
I followed the simple instructions provided in the docs. This is the Dockerfile:
FROM jupyter/datascience-notebook:33add21fab64
# Install in the default python3 environment
RUN pip install --quiet --no-cache-dir confluent-kafka && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
The build works fine, but when I run it, this is the error I get:
[FATAL tini (8)] exec -- failed: No such file or directory
I've tried looking online but haven't found anything useful.
Any help?
I am still not sure why the error occurs and would be curious to understand it better. In the meantime, I got the notebook working on Docker using another base image.
Here is the Dockerfile:
FROM jupyter/minimal-notebook
RUN pip3 install confluent-kafka
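To reproduce, I build and run it with something like this (kafka-notebook is just the tag I picked):
docker build -t kafka-notebook .
docker run -p 8888:8888 kafka-notebook
and the server comes up on localhost:8888 as usual for the Jupyter images.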
I am trying to install tesseract-ocr in Docker from a Dockerfile. When I build the Dockerfile everything looks normal and I get no errors, but when I run the container, tesseract is not installed.
If I access the container using sudo docker exec -t -i <container_id> /bin/bash and manually install tesseract using apt-get install -y tesseract-ocr-all, it installs and works perfectly. Why doesn't it work when I try to install it during the build process?
My Dockerfile looks like this:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
RUN tesseract --version
FROM python:3.7
WORKDIR ocr
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
Thanks!
It looks like you are leveraging Docker multi-stage builds without realizing it.
When you put FROM python:3.7, you essentially throw away everything you have done above that, since you start a new stage.
The easiest solution I can see is to move
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
RUN tesseract --version
into the FROM python:3.7 stage, and remove the FROM ubuntu:20.04 stage.
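Put together, the whole thing would look roughly like this (a sketch that merges your two stages into one; the pip and COPY lines are unchanged from your original):
FROM python:3.7
WORKDIR ocr
# Install tesseract in the same stage that produces the final image
RUN apt-get update \
 && apt-get install -y tesseract-ocr-all
RUN tesseract --version
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .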
You need to switch your user, since you likely don't have permission to run those commands. Something like this should work:
USER root
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
USER <switch back to previous user>
You'll need to figure out what the default user is in order to switch back; you can probably find it in the Ubuntu docs or by running whoami.
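For example, to see the default user of the base image from the question (ubuntu:20.04), you can run:
docker run --rm ubuntu:20.04 whoami
which prints root, since the stock Ubuntu image runs as root by default.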
I'm trying to run the Dockerfile below on the ubuntu image.
FROM ubuntu
RUN apt-get update
RUN apt-get install -y python
RUN apt-get install -y python-pip
RUN pip install flask
COPY app.py /opt/app.py
ENTRYPOINT FLASK_APP=/opt/app.py flask run --host=0.0.0.0
But I'm getting the error below at step 3.
Step 1/7 : FROM ubuntu
---> 549b9b86cb8d
Step 2/7 : RUN apt-get update
---> Using cache
---> 78d87d6d9188
Step 3/7 : RUN apt-get install -y python
---> Running in a256128fde51
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package python
Although when I run the command below individually,
sudo apt-get install -y python
it installs successfully.
Can anyone please help me out?
Note: I'm working behind an organization proxy.
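If the proxy is relevant here: the standard way to hand it to a build is via Docker's predefined proxy build args (proxy.example.com:8080 below is just a placeholder for the real address):
docker build --build-arg http_proxy=http://proxy.example.com:8080 \
             --build-arg https_proxy=http://proxy.example.com:8080 .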
Step 2/7 : RUN apt-get update
---> Using cache
You should run apt-get update and apt-get install in the same RUN instruction, as follows:
RUN apt-get update && apt-get install -y python
Each instruction in a Dockerfile creates a separate layer in the image, and the layers are cached. So the apt-get update might just use the cache and not even run; this is what happened in your case as well, as you can see from the line ---> Using cache in your logs. You can use docker build --no-cache to make Docker rebuild all the layers without using the cache.
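For example (myimage being a placeholder tag):
docker build --no-cache -t myimage .
This forces every step, including apt-get update, to run again instead of coming from the cache.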
You can instead just use the official python:3 image as the base image to run your Python apps.
In the Python installation tutorial there is a package named python3.x for Debian. I think this is your case. I tested it in Docker, and the right configuration looks like this:
FROM ubuntu:20.04
RUN apt-get update && apt-get -y install python3.8 python3.8-dev
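One caveat: this installs only the versioned python3.8 binary. If the unversioned python command is also needed, on Ubuntu 20.04 it comes from the separate python-is-python3 package, e.g.
RUN apt-get update && apt-get -y install python3.8 python3.8-dev python-is-python3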
I feel you should rather use the python:3 image instead of using ubuntu and then installing Python on it.
FROM python:3
# No need to apt-get install python3-pip; pip already ships with the python image
COPY . /usr/src/apps    # you can change this path
WORKDIR /usr/src/apps/  # this as well
RUN pip install -r requirements.txt
CMD ["python","app.py"]
Try the following; it worked for me:
apt-get install python3-pip
And then to install Flask you will have to use
pip3 install flask