I am reading a book about how to use MLflow.
The method is to install MLflow inside a container (not natively)
The dockerfile is
FROM continuumio/miniconda3
# Quote the version spec so the shell does not treat ">=" as output redirection
RUN pip install "mlflow>=1.18.0" \
&& pip install numpy \
&& pip install scipy \
&& pip install pandas \
&& pip install scikit-learn \
&& pip install cloudpickle \
&& pip install pandas_datareader==0.10.0 \
&& pip install yfinance
So I build this with docker build -t stockpred -f Dockerfile .
Then I run it with docker run -v $(pwd):/workfolder -it --rm stockpred
Now I am inside the container, where MLflow is installed, and I run:
mlflow run .
2022/06/05 08:55:12 ERROR mlflow.cli: === Could not find Docker executable. Ensure Docker is installed as per the instructions at https://docs.docker.com/install/overview/. ===
What does this mean? Does MLflow require Docker to be installed inside the Docker container? Does that mean that MLflow itself uses Docker?
EDIT:
Reading the MLflow tutorial (which uses conda), it seems that Docker does in fact have to be installed inside the container when the project uses docker_env: with an MLproject file that specifies conda_env instead of docker_env, mlflow run . works fine.
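For reference, a minimal MLproject file of the kind that works with the conda backend might look like this (the file names and entry point command are illustrative, not from the book):

```yaml
name: stockpred
conda_env: conda.yaml        # conda backend: no Docker binary needed
entry_points:
  main:
    command: "python train.py"
```

With docker_env instead of conda_env, MLflow shells out to the docker executable to build and run the project image, which is why mlflow run fails when Docker is not available in the environment it runs in.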
I am new to Docker. I built a Docker image using the Dockerfile below:
FROM ubuntu
COPY requirements.txt .
RUN apt-get update && \
    apt-get install -y python3 && \
    apt-get install -y python3-pip && \
    apt-get install -y libglib2.0-0 \
        libsm6 \
        libxrender1 \
        libxext6
RUN python3 -m pip install -r requirements.txt
COPY my_code /container/home/user
ENV PYTHONPATH /container/home/user/program_dir_1
RUN apt-get install -y libgl1
WORKDIR /container/home/user
CMD python3 program_dir_1/program_dir_2/program_dir_3/main.py
Now I have a local directory /home/host/local_dir, and I want all the files that the program creates at runtime to be written/copied to this local directory.
I am using the command below to bind-mount the volume:
docker run -it --volume /home/host/local_dir:/container/home/user my_docker_image
It gives me the following error:
program_dir_1/program_dir_2/program_dir_3/main.py [Errno 2] No such file or directory
When I run the below command
docker run -it --volume /home/host/local_dir:/container/home/user my_docker_image pwd
It prints the container path that I mounted the host directory onto. It seems the mount is also shadowing the container's working directory with the contents of the host volume.
Can anyone please help me understand how to copy all the files and data generated in the container's working directory to a directory on the host?
PS: I went through the StackOverflow questions below and tried to understand them, but without success:
How to write data to host file system from Docker container # I found one solution there, but got an error when running
docker run -it --rm --volume v_mac:/home/host/local_dir --volume v_mac:/container/home/user my_docker_image cp -r /home/host/local_dir /container/home/user
Docker: Copying files from Docker container to host # This is not of much use, as I assume the container has to be in a running state; in my case it exited after the program completed.
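For what it's worth, the behavior described above matches how bind mounts work: mounting /home/host/local_dir over /container/home/user hides the code that was copied into the image at that path, which is why main.py can no longer be found. A sketch of a mount that avoids shadowing the code (the output/ subdirectory is an assumption; the program would need to write its files there):

```shell
# Mount the host directory at a sub-path instead of over the code directory,
# so the files baked into the image stay visible
docker run -it --volume /home/host/local_dir:/container/home/user/output my_docker_image
```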
I was trying to run a custom version of a Jupyter notebook image on macOS; I just wanted to install the confluent-kafka library in order to use the Kafka Python client.
I followed the simple instruction provided in the docs. This is the Dockerfile:
FROM jupyter/datascience-notebook:33add21fab64
# Install in the default python3 environment
RUN pip install --quiet --no-cache-dir confluent-kafka && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
The build works fine, but this is the error I get when running the container:
[FATAL tini (8)] exec -- failed: No such file or directory
I tried searching online but haven't found anything useful. Any help?
I am still not sure what causes the error and would be curious to understand it better. In the meantime I got the notebook working in Docker using another base image.
Here is the Dockerfile:
FROM jupyter/minimal-notebook
RUN pip3 install confluent-kafka
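For completeness, building and running this working image might look like the following (the image tag and port publishing are assumptions, not from the original post):

```shell
docker build -t kafka-notebook .
# Jupyter images listen on port 8888 by default; publish it to the host
docker run --rm -p 8888:8888 kafka-notebook
```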
I am very new to Docker and could not figure out how to search Google to answer my question.
I am using Windows.
I created my Docker image using:
FROM python:3
RUN apt-get update && apt-get install -y python3-pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN pip3 install jupyter
RUN useradd -ms /bin/bash demo
USER demo
WORKDIR /home/demo
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0"]
and it worked fine. Now I've tried to build it again, but with different libraries in requirements.txt, and the build fails with ERROR: Could not find a version that satisfies the requirement apturl==0.5.2. When I searched what apturl is, it seems Ubuntu is needed to install it.
So my question is: how do you create a Jupyter notebook server using Docker with Ubuntu libraries? (I am using Windows.) Thanks!
Try upgrading pip first:
RUN pip3 install -U pip
RUN pip3 install -r requirements.txt
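If upgrading pip does not help: apturl is an Ubuntu system package that typically ends up in requirements.txt when the file is generated with pip freeze on an Ubuntu desktop, and it is not installable from PyPI. One workaround is to filter such distro-only entries out before building (the package list in the pattern is a guess at common offenders):

```shell
# Sample requirements.txt as `pip freeze` on an Ubuntu desktop might produce it
printf 'apturl==0.5.2\nnumpy==1.24.0\nrequests==2.31.0\n' > requirements.txt

# Drop distro-only packages that pip cannot resolve from PyPI
grep -vE '^(apturl|python-apt|ubuntu-)' requirements.txt > requirements.docker.txt

cat requirements.docker.txt   # only pip-installable packages remain
```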
I am writing a Dockerfile in which I try to download files from S3 using the AWS CLI and add those files to the Docker container, as below, following this page:
FROM nrel/energyplus:8.9.0
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
ADD . /var/simdata
WORKDIR /var/simdata
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_REGION=ap-northeast-1
RUN aws s3 cp s3://file/to/path .
RUN mkdir -p /var/simdata
ENTRYPOINT [ "/bin/bash" ]
After building the image and starting it with docker run -it <image id>, I expected to find the file downloaded from S3 in the container, but I could not.
Does anyone know where I can find this file downloaded from s3?
The file should be in /var/simdata.
Even if you had not set WORKDIR before downloading the file from S3, your image would fall back to the default working directory of nrel/energyplus:8.9.0, which is the same path:
$ docker pull nrel/energyplus:8.9.0
$ docker inspect --format '{{.ContainerConfig.WorkingDir}}' nrel/energyplus:8.9.0
/var/simdata
A better approach would be to download the file from S3 to your local machine first and, in the Dockerfile, copy that file to /var/simdata:
COPY file /var/simdata/file
This gives you more control.
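If you do keep the build-time download, the ARG values have to be supplied at build time. A sketch of the build command (reading the keys from the local environment is an assumption; note that --build-arg values can be recovered from the image history, so avoid this for images you publish):

```shell
docker build -t simdata \
  --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  --build-arg AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  .
```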
Docker commands
# Import Ubuntu image to Docker
docker pull ubuntu:16.04
docker run -it ubuntu:16.04
# Install Python3 and pip3
apt-get update
apt-get install -y python3 python3-pip
# Install Selenium
pip3 install selenium
# Install BeautifulSoup4
pip3 install beautifulsoup4
# Install library for PhantomJS
apt-get install -y wget libfontconfig
# Downloading and installing the binary
mkdir -p /home/root/src && cd $_
wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin/
# Installing font
apt-get install -y fonts-nanum*
Question
I am trying to import an Ubuntu image into Docker and install several packages including python3, pip3, bs4, and PhantomJS. Then I want to save all this configuration in Docker as "ubuntu-phantomjs". As I am currently inside the Ubuntu image, anything that starts with the docker command does not work. How can I save my image?
Here is the dockerfile:
# Import Ubuntu image to Docker
FROM ubuntu:16.04
# Install Python3, pip3, library and fonts
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    wget libfontconfig \
    fonts-nanum* \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install selenium beautifulsoup4
# Downloading and installing the binary
RUN mkdir -p /home/root/src && cd /home/root/src \
    && wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 \
    && tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 \
    && cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/
Now, after saving the code in a file named Dockerfile, open a terminal in the directory where the file is stored and run the following command:
$ docker build -t ubuntu-phantomjs .
-t tags the resulting image as ubuntu-phantomjs, and . tells Docker to use the current directory as the build context. The Dockerfile above is not a standard one and does not follow all the good practices mentioned here. You can change this file according to your needs; read the documentation for more help.
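As an aside, if you have already installed everything interactively in a running container (as in the original question), docker commit can snapshot that container into an image without a Dockerfile, though the Dockerfile route above is more reproducible (the container ID below is a placeholder):

```shell
docker ps                                       # find the ID of the running container
docker commit <container-id> ubuntu-phantomjs   # save its current state as an image
```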