I'm new to Docker, and I created a Docker image with the Dockerfile below. It is meant for a Raspberry Pi, so all of the packages are needed. I have read articles about multi-stage Dockerfiles, but I don't understand them well. How can I reduce the size of the image to simplify deployment on the Raspberry Pi?
FROM continuumio/anaconda3:latest
RUN conda create -y -n dcase2020 python=3.7
SHELL ["conda", "run", "-n", "dcase2020", "/bin/bash", "-c"]
RUN conda install -c conda-forge vim -y
RUN conda install pyaudio
RUN pip install librosa
RUN conda install psutil
RUN pip install psds_eval
RUN conda install -y pandas h5py scipy \
    && conda install -y pytorch torchvision -c pytorch \
    && conda install -y pysoundfile youtube-dl tqdm -c conda-forge \
    && conda install -y ffmpeg -c conda-forge \
    && pip install dcase_util \
    && pip install sed-eval
EXPOSE 80
CMD ["bash"]
Thank you very much!
You are creating a new environment that probably contains only your project's requirements, so there is no point in carrying the huge Anaconda base environment as extra weight. Instead, switch to a Miniconda-based image such as continuumio/miniconda3.
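A minimal sketch of the same build on a Miniconda base (packages and channels are copied from the Dockerfile above; dropping vim and collapsing everything into one cleaned-up RUN layer are assumptions on my part):

```dockerfile
FROM continuumio/miniconda3:latest

RUN conda create -y -n dcase2020 python=3.7
SHELL ["conda", "run", "-n", "dcase2020", "/bin/bash", "-c"]

# One layer for all installs, with the package caches cleaned in the
# same layer so they never end up baked into the image
RUN conda install -y pyaudio psutil pandas h5py scipy \
    && conda install -y pytorch torchvision -c pytorch \
    && conda install -y pysoundfile youtube-dl tqdm ffmpeg -c conda-forge \
    && pip install --no-cache-dir librosa psds_eval dcase_util sed-eval \
    && conda clean -afy

EXPOSE 80
CMD ["bash"]
```

Because the cleanup runs in the same RUN instruction as the installs, the deleted caches never become part of any layer, which is usually a bigger win than multi-stage tricks for conda-based images.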
I built a Docker image based on
nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04
My Dockerfile is as follows:
ARG CUDA_VERSION=11.3.1
FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-runtime-ubuntu20.04
ARG PYTORCH_VERSION=1.12.1
# Set a docker label to enable container to use SAGEMAKER_BIND_TO_PORT environment variable if present
LABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true
LABEL maintainer="Change Healthcare"
LABEL dlc_major_version="1"
ENV PATH /opt/conda/bin:$PATH
RUN rm /etc/apt/sources.list.d/*
RUN apt-get update
RUN apt-get install -y curl wget
RUN curl -L -o ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py38_23.1.0-1-Linux-x86_64.sh
RUN chmod +x ~/miniconda.sh
RUN ~/miniconda.sh -b -p /opt/conda
RUN rm ~/miniconda.sh
RUN /opt/conda/bin/conda install -y ruamel_yaml==0.15.100 cython botocore mkl-include mkl
RUN /opt/conda/bin/conda clean -ya
RUN pip install --upgrade pip --trusted-host pypi.org --trusted-host files.pythonhosted.org
RUN ln -s /opt/conda/bin/pip /usr/local/bin/pip
RUN ln -s /opt/conda/bin/pip /usr/local/bin/pip3
RUN ln -s /opt/conda/bin/python /usr/local/bin/python
RUN pip install packaging==20.4 enum-compat==0.0.3
# Conda installs links for libtinfo.so.6 and libtinfo.so.6.2 both
# Which causes "/opt/conda/lib/libtinfo.so.6: no version information available" warning
# Removing link for libtinfo.so.6. This change is needed only for ubuntu 20.04-conda, and can be reverted
# once conda fixes the issue: https://github.com/conda/conda/issues/9680
RUN rm -rf /opt/conda/lib/libtinfo.so.6
WORKDIR /
RUN cd tmp/ \
&& rm -rf tmp*
# Uninstall and re-install torch and torchvision from the PyTorch website
RUN pip uninstall -y torch
RUN /opt/conda/bin/conda install pytorch==${PYTORCH_VERSION} cudatoolkit=11.3 -c pytorch
I start a container from this image, and inside the container I run:
import torch
torch.cuda.is_available()
it returns False.
If I build an image based on nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04
import torch
torch.cuda.is_available()
returns True
But the devel image is much larger than the runtime image, and I want to use runtime as the base image. Can anyone help me figure out how to let PyTorch find the GPU using nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04 as the base image?
Regards,
Arthur
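One way to narrow down where the False comes from (a sketch, not from the original post; `torch.version.cuda` and `torch.cuda.is_available` are standard PyTorch attributes):

```python
# Distinguish the two usual causes of torch.cuda.is_available() == False:
# a CPU-only PyTorch build (torch.version.cuda is None), versus a CUDA
# build that simply cannot see a GPU driver (e.g. the container was
# started without --gpus all / the NVIDIA container runtime).
def diagnose_cuda():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.version.cuda is None:
        return "CPU-only torch build was installed"
    if not torch.cuda.is_available():
        return "CUDA build, but no usable GPU/driver visible in this environment"
    return "CUDA is available"

print(diagnose_cuda())
```

If `torch.version.cuda` prints None, the installed package is a CPU-only build and the base image is not the problem; if it prints a version but `is_available()` is still False inside the runtime container, the container cannot see the host driver.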
I have the following Dockerfile:
FROM --platform=linux/x86_64 nvidia/cuda:11.7.0-devel-ubuntu20.04
COPY app ./app
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade && apt-get install -y apt-utils
RUN apt-get install -y \
net-tools iputils-ping \
build-essential cmake git \
curl wget vim \
zip p7zip-full p7zip-rar \
imagemagick ffmpeg \
libomp5
# RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
COPY Miniconda3-latest-Linux-x86_64.sh .
RUN chmod guo+x Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
RUN export PATH=~/miniconda3/bin:$PATH
RUN conda --version
RUN conda update -n base conda
RUN conda create -y --name servier python=3.6
RUN conda activate servier
RUN conda install -c conda-forge rdkit
CMD ["bash"]
When I run docker image build -t image_test_cuda2 ., it breaks at RUN conda --version.
The error is /bin/sh: 1: conda: not found. The problem is that RUN export PATH=~/miniconda3/bin:$PATH has no effect: it does not put conda on the PATH for the later instructions.
If I build the image only up to RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3, get a shell in the container with docker exec -it <id> /bin/bash, and manually run export PATH=~/miniconda3/bin:$PATH, it works fine. If I then manually run conda update -n base conda inside the container, that works too.
The conclusion is that RUN export PATH=~/miniconda3/bin:$PATH does not work in a Dockerfile during docker image build. How can I solve this issue?
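Each RUN instruction runs in its own shell, so an export made in one RUN is gone by the time the next RUN starts. The usual fix is ENV, which persists for all later instructions and in the final image. A sketch (note that ~ is not expanded by ENV, so an absolute install prefix such as /opt/miniconda3 is assumed here):

```dockerfile
# Install to a fixed absolute prefix; ENV does not expand ~
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /opt/miniconda3

# ENV applies to every subsequent instruction and to the final image,
# unlike `RUN export ...`, which only affects its own shell
ENV PATH=/opt/miniconda3/bin:$PATH

RUN conda --version
```

The later RUN conda activate servier line will fail for a related reason: conda activate needs shell initialization, so either use conda run -n servier ... or a SHELL override as in the first question above.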
I have some data science projects running in Docker containers (I use k8s). I am trying to speed up my code by using PyPy as my interpreter, but this has been a nightmare.
My OS is ubuntu 20.04
The main libraries I need are:
SQLAlchemy
SciPy
gRPC
For gRPC I'm using grpclib, and SciPy I'm installing via the Miniconda Docker image.
My final hurdle is installing psycopg2cffi to make SQLAlchemy work, but after a couple of all-nighters I still haven't managed to make this work. I can install it, but when I run it I get a SCRAM authentication problem that I've seen others hit as well.
Is there a PyPy Dockerfile someone has already created that has data science libraries in it? It doesn't seem like something nobody has tried before.
Here's my Dockerfile so far:
FROM conda/miniconda3 as base
# Set up conda env with pypy3 as the interpreter
RUN conda create -c conda-forge -n pypy-env pypy python=3.8 -y
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
RUN pypy -m ensurepip
RUN apt-get -y update && \
apt-get -y install build-essential g++ python3-dev libpq-dev
# Install big/annoying libraries first
RUN pip install psycopg2cffi
RUN conda install scipy -y
RUN pip install numpy
WORKDIR /home
COPY ./core/requirements/requirements.txt .
COPY ./core/requirements/basic_requirements.txt .
RUN pip install -r ./requirements.txt
FROM python:3.8-slim as final
WORKDIR /home
COPY --from=base /usr/lib/x86_64-linux-gnu/libpq* /usr/lib/x86_64-linux-gnu/
COPY --from=base /usr/local/envs/pypy-env /usr/local/envs/pypy-env
ENV PATH="/usr/local/envs/pypy-env/bin:$PATH"
COPY .env .env
COPY ./src/ .
I've built an application which detects and tracks a region of interest. It works absolutely fine on my local machine, but when I dockerize the app I get this error:
"qt.qpa.plugin: Could not find the Qt platform plugin "xcb" in "" " with OpenCV version 4.5.5.64.
When I downgrade to OpenCV version 4.1.2.30, it starts working (note: I want to stay on OpenCV 4.5.5.64 for tracker-related reasons).
I am running the Docker image like this: sudo docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix --device /dev/video0 --gpus all --ipc=host myImage.
It works with the older version of OpenCV but not with 4.5.5.64. I've been banging my head against this issue for the last 4 days now; I would really appreciate some help.
FROM nvcr.io/nvidia/pytorch:22.04-py3
RUN rm -rf /opt/pytorch
RUN apt-get update && apt-get autoclean
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip
RUN pip3 install pyqt5
RUN apt-get install -y '^libxcb.*-dev' libx11-xcb-dev libglu1-mesa-dev libxrender-dev libxi-dev libxkbcommon-dev libxkbcommon-x11-dev
RUN apt-get install -y libsm6 libxrender1 libfontconfig1
ENV QT_DEBUG_PLUGINS=1
COPY requirements.txt .
RUN python -m pip install --upgrade pip
RUN pip uninstall -y torch torchvision torchtext Pillow
RUN pip install --no-cache -r requirements.txt albumentations wandb gsutil notebook "Pillow>=9.1.0" \
    torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
RUN pip install opencv-contrib-python==4.5.5.64
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
ENV OMP_NUM_THREADS=8
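One likely culprit (an assumption, but consistent with the QT_DEBUG_PLUGINS=1 line already in this Dockerfile): the opencv-python/opencv-contrib-python wheels bundle their own Qt platform plugins, and with PyQt5 installed alongside them the two Qt builds can collide, producing exactly this "xcb" error. This sketch just reports whether the installed cv2 ships bundled Qt plugins:

```python
import importlib.util
import os

# opencv-(contrib-)python manylinux wheels ship Qt platform plugins under
# cv2/qt/plugins; if present alongside PyQt5's own Qt, the mismatch is a
# common cause of the "Could not find the Qt platform plugin xcb" error.
def bundled_qt_status():
    spec = importlib.util.find_spec("cv2")
    if spec is None or not spec.submodule_search_locations:
        return "cv2 not installed"
    pkg_dir = list(spec.submodule_search_locations)[0]
    qt_dir = os.path.join(pkg_dir, "qt", "plugins")
    return "bundled Qt plugins present" if os.path.isdir(qt_dir) else "no bundled Qt plugins"

print(bundled_qt_status())
```

If the bundled plugins are present, keeping only one Qt provider in the image (for example, not mixing pyqt5 with the GUI-enabled OpenCV wheel) is the usual direction to investigate.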
Below is the Dockerfile content.
Docker commands
# Import Ubuntu image to Docker
docker pull ubuntu:16.04
docker run -it ubuntu:16.04
# Instsall Python3 and pip3
apt-get update
apt-get install -y python3 python3-pip
# Install Selenium
pip3 install selenium
# Install BeautifulSoup4
pip3 install beautifulsoup4
# Install library for PhantomJS
apt-get install -y wget libfontconfig
# Downloading and installing binary
mkdir -p /home/root/src && cd $_
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin/
# Installing font
apt-get install -y fonts-nanum*
Question
I am trying to import an Ubuntu image into Docker and install several packages including python3, pip3, bs4, and PhantomJS. Then I want to save all of this configuration in Docker as "ubuntu-phantomjs". As I am currently inside the Ubuntu image, anything that starts with the 'docker' command does not work. How can I save my image?
Here is the dockerfile:
# Import Ubuntu image to Docker
FROM ubuntu:16.04
# Install Python3, pip3, library and fonts
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    wget libfontconfig \
    fonts-nanum* \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install selenium beautifulsoup4
# Downloading and installing binary
# (assumes phantomjs-2.1.1-linux-x86_64.tar.bz2 is already in /home/root/src, e.g. via wget or COPY)
RUN mkdir -p /home/root/src && cd $_ && tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 && cd phantomjs-2.1.1-linux-x86_64/bin/ && cp phantomjs /usr/local/bin/
Now, after saving the code in a file named Dockerfile, open a terminal in the same directory as the one where the file is stored, and run the following command:
$ docker build -t ubuntu-phantomjs .
-t tags the image as ubuntu-phantomjs, and . sets the build context to the current directory. The above Dockerfile is not a standard one and does not follow all of the good practices mentioned here. You can change this file according to your needs; read the documentation for more help.