I was trying to run a custom version of a Jupyter notebook image on macOS; I just wanted to install the confluent-kafka library in order to use the Kafka Python client.
I followed the simple instructions provided in the docs. This is the Dockerfile:
FROM jupyter/datascience-notebook:33add21fab64
# Install in the default python3 environment
RUN pip install --quiet --no-cache-dir confluent-kafka && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
The build works fine, but when I run the container this is the error I get:
[FATAL tini (8)] exec -- failed: No such file or directory
I tried looking online but haven't found anything useful.
Any help?
I am still not sure what causes the error and would be curious to understand it better. In the meantime I got the notebook working in Docker using another base image.
Here is the Dockerfile:
FROM jupyter/minimal-notebook
RUN pip3 install confluent-kafka
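For completeness, a build-and-run sketch for this image (the kafka-notebook tag and the host port are placeholders, not anything from the docs):
docker build -t kafka-notebook .
docker run -it --rm -p 8888:8888 kafka-notebook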
Related
I am trying to install the Rust compiler within a Jupyter Docker image. Here is the Dockerfile:
FROM jupyter/scipy-notebook:python-3.10.5 as base
RUN pip install nb_black
USER root
RUN apt update && apt upgrade -y
RUN apt install build-essential -y
RUN apt install curl -y
USER jovyan
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
RUN pip install maturin
COPY ./docker_helpers /rust_inst
RUN chmod a+x /rust_inst/setup_rust.sh
RUN /rust_inst/setup_rust.sh
FROM base as prod
CMD ["jupyter", "lab", "--ip", "0.0.0.0"]
and the setup_rust.sh contains just an export statement:
#!/bin/bash
export PATH="$HOME/.cargo/bin:$PATH"
I need to use the root user initially because of some permission-denied errors, but after that the jovyan user is able to install everything necessary, or at least I do not get any errors from Docker at build time.
Does the Jupyter Docker image mask the PATH variable, or make anything outside the jovyan home unavailable?
How can I have the Rust compiler available from a terminal within Jupyter?
I realised that the home directory is set to /home/jovyan itself, which in docker compose I had overwritten with a volume in order to have dynamic code. Once I moved the volume, I found the Rust compiler in the scope of the jovyan user.
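For anyone hitting the same issue: an export inside a RUN step (or a script called from one) only lasts for that single build step, so it never reaches a terminal inside JupyterLab. A sketch of an alternative, assuming rustup installs under /home/jovyan/.cargo, is to persist the setting with ENV instead:
FROM jupyter/scipy-notebook:python-3.10.5 as base
USER root
RUN apt-get update && apt-get install -y build-essential curl
USER jovyan
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# ENV persists into later layers and into the running container,
# unlike an `export` executed inside a RUN step or a script it calls
ENV PATH="/home/jovyan/.cargo/bin:${PATH}"
# Sanity check at build time
RUN cargo --version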
I am reading a book about how to use MLflow.
The method is to install MLflow inside a container (not natively).
The Dockerfile is:
FROM continuumio/miniconda3
# Quote the version spec so the shell does not treat >= as a redirection
RUN pip install "mlflow>=1.18.0" \
&& pip install numpy \
&& pip install scipy \
&& pip install pandas \
&& pip install scikit-learn \
&& pip install cloudpickle \
&& pip install pandas_datareader==0.10.0 \
&& pip install yfinance
So I build this with docker build -t stockpred -f Dockerfile .
Then I run it with docker run -v $(pwd):/workfolder -it --rm stockpred
So I am inside the container, mlflow is installed there and I do:
mlflow run .
2022/06/05 08:55:12 ERROR mlflow.cli: === Could not find Docker executable. Ensure Docker is installed as per the instructions at https://docs.docker.com/install/overview/. ===
What does this mean? Does MLflow require Docker to be installed inside the Docker container? Does that mean that MLflow uses Docker?
EDIT:
Reading the MLflow tutorial (which uses conda), it seems that Docker does in fact have to be installed inside Docker, because when I use an MLproject file that uses conda_env rather than docker_env, mlflow run . works fine.
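For reference, a minimal sketch of the two MLproject variants being compared (the file names and the train.py entry point are placeholders, not the book's actual project):
# MLproject using a conda environment: mlflow builds the conda env itself,
# so no Docker is needed where `mlflow run` executes
name: stockpred
conda_env: conda.yaml
entry_points:
  main:
    command: "python train.py"

# MLproject using a Docker environment: mlflow shells out to the docker CLI,
# so a Docker daemon must be reachable from where `mlflow run` executes
name: stockpred
docker_env:
  image: stockpred
entry_points:
  main:
    command: "python train.py"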
I am very new to Docker and could not figure out how to search Google to answer my question.
I am using Windows.
I've created a Docker image using:
FROM python:3
RUN apt-get update && apt-get install -y python3-pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN pip3 install jupyter
RUN useradd -ms /bin/bash demo
USER demo
WORKDIR /home/demo
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0"]
and it worked fine. Now I've tried to create it again, but with different libraries in requirements.txt, and the build fails with ERROR: Could not find a version that satisfies the requirement apturl==0.5.2. From searching what apturl is, I think it needs Ubuntu to be installed.
So my question is: how do you create a Jupyter notebook server using Docker with Ubuntu libraries? (I am using Windows.) Thanks!
Try upgrading pip:
RUN pip install -U pip
RUN pip3 install -r requirements.txt
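In the context of the Dockerfile from the question, that upgrade would sit roughly here (a sketch, nothing else changed):
FROM python:3
RUN apt-get update && apt-get install -y python3-pip
COPY requirements.txt .
# Upgrade pip before resolving requirements so newer wheels can be found
RUN pip install -U pip
RUN pip install -r requirements.txt
RUN pip3 install jupyter
RUN useradd -ms /bin/bash demo
USER demo
WORKDIR /home/demo
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0"]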
I am trying to install the Java runtime in a Debian-based Docker image (mcr.microsoft.com/dotnet/core/sdk:3.1-buster). According to various how-tos this should be possible by running:
RUN apt update
RUN apt-get install openjdk-11-jre
The latter command comes back with
E: Unable to locate package openjdk-11-jre
However according to https://packages.debian.org/buster/openjdk-11-jre the package does exist. What am I doing wrong?
I'm unsure which image you are pulling from. I used slim; here is the Dockerfile:
FROM debian:buster-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN mkdir -p /usr/share/man/man1 /usr/share/man/man2
RUN apt-get update && \
apt-get install -y --no-install-recommends \
openjdk-11-jre
# Prints installed java version, just for checking
RUN java --version
NOTE: If you don't run mkdir -p /usr/share/man/man1 /usr/share/man/man2 you'll run into dependency problems with ca-certificates, openjdk-11-jre-headless, etc. I've been using this fix provided by the community and haven't really looked into a permanent fix.
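I have not tried it on the dotnet image from the question, but applying the same workaround there should look roughly like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster
ENV DEBIAN_FRONTEND=noninteractive
# Create the man directories that the JRE's dependencies expect to exist
RUN mkdir -p /usr/share/man/man1 /usr/share/man/man2
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-11-jre
# Prints installed java version, just for checking
RUN java --version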
If I add
FROM nginx:1.16-alpine
to my Dockerfile, my build breaks with the error:
/bin/sh: pip: not found
I tried sending an update command via:
RUN set -xe \
&& apt-get update \
&& apt-get install python-pip
but then I get the error that apt-get can't be found.
Here is my Dockerfile:
FROM python:3.7.2-alpine
FROM nginx:1.16-alpine
ENV INSTALL_PATH /web
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:9000 --access-logfile - "web.webhook_server:create_app()"
If I remove that one line:
FROM nginx:1.16-alpine
it all runs fine. But of course, I need nginx.
What could be going wrong here? I'm very confused.
As mentioned in this issue:
Using multiple FROM is not really a feature but a bug [...]
Note that:
- There is discussion about removing support for multiple FROM: #13026
So you should decide on the one image that fits your needs best and then install the packages you need via RUN apk add. Note that both images you used as a base are themselves based on Alpine Linux, and you need to use apk instead of apt-get to install packages.
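For example, staying on the Alpine nginx image, something along these lines might work (the python3/py3-pip package names are an assumption and can differ between Alpine releases):
FROM nginx:1.16-alpine
ENV INSTALL_PATH /web
WORKDIR $INSTALL_PATH
# Alpine's package manager is apk, not apt-get
RUN apk add --no-cache python3 py3-pip
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:9000 --access-logfile - "web.webhook_server:create_app()"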
Use "FROM nginx:1.16" instead of "FROM nginx:1.16-alpine". The alpine image doesn't have apt. With "nginx:1.16" you can install your extra packages with apt.
The FROM directive tells the Docker daemon which image to build on. Multiple FROM lines each start a new build stage, and only the last stage ends up in the final image, so you cannot merge two base images this way.
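If you do keep several FROM lines, Docker treats them as a multi-stage build, and anything you need from an earlier stage has to be copied over explicitly. A minimal sketch of those semantics only (paths are placeholders, and whether your requirements build on Alpine is a separate question):
# The final image is built from the last FROM; earlier stages exist only
# so that files can be copied out of them.
FROM python:3.7.2-alpine AS build
WORKDIR /web
COPY requirements.txt .
RUN pip wheel --wheel-dir=/wheels -r requirements.txt

FROM nginx:1.16-alpine
COPY --from=build /wheels /wheels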
Let me know if this helps.