conda: not found - Docker container does not run export PATH=~/miniconda3/bin:$PATH - docker

I have the following Dockerfile:
FROM --platform=linux/x86_64 nvidia/cuda:11.7.0-devel-ubuntu20.04
COPY app ./app
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade && apt-get install -y apt-utils
RUN apt-get install -y \
net-tools iputils-ping \
build-essential cmake git \
curl wget vim \
zip p7zip-full p7zip-rar \
imagemagick ffmpeg \
libomp5
# RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
COPY Miniconda3-latest-Linux-x86_64.sh .
RUN chmod guo+x Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
RUN export PATH=~/miniconda3/bin:$PATH
RUN conda --version
RUN conda update -n base conda
RUN conda create -y --name servier python=3.6
RUN conda activate servier
RUN conda install -c conda-forge rdkit
CMD ["bash"]
When I run docker image build -t image_test_cuda2 . it breaks at RUN conda --version.
The error is: ... /bin/sh: 1: conda: not found. The problem is that RUN export PATH=~/miniconda3/bin:$PATH is not working: it does not put conda on the PATH.
If I build the image only up to RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3, get a shell in the container with docker exec -it <id> /bin/bash, and run export PATH=~/miniconda3/bin:$PATH manually at the prompt, it works fine. If I then manually run the next command, conda update -n base conda, inside the container, it also works.
The conclusion is that RUN export PATH=~/miniconda3/bin:$PATH seems to have no effect during docker image build. How can I solve this issue?
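Each RUN instruction executes in its own shell, so an export made in one RUN is gone by the next instruction; only ENV persists into later build steps and into the final image. A minimal sketch of the usual fix, assuming the installer prefix /root/miniconda3 (the build runs as root, so ~ is /root):
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /root/miniconda3
# ENV, unlike RUN export, persists for every following instruction and at runtime
ENV PATH=/root/miniconda3/bin:$PATH
RUN conda --version
# conda activate does not survive across RUN lines either; installing straight
# into the named environment avoids activation during the build:
RUN conda install -y -n servier -c conda-forge rdkit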

Related

Docker exec container pytest fail

I am using the dev image so I can have the CUDA compiler. The issue is that when the CI runs the commands below I get the error shown, but it does not happen if I build the standard runtime container instead (the commented line in the Dockerfile).
CONTAINER=$(docker run -d gpu-test)
docker exec $CONTAINER pytest
OCI runtime exec failed: exec failed: unable to start container process: exec: "pytest": executable file not found in $PATH: unknown
Dockerfile:
# Pulls the basic Image from NVIDIA repository
FROM rapidsai/rapidsai-dev:22.04-cuda11.5-devel-ubuntu20.04-py3.9
# FROM rapidsai/rapidsai:22.04-cuda11.5-runtime-ubuntu20.04-py3.9
# Updates OS libraries
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
--fix-missing git python3-setuptools python3-pip build-essential libcurl4-gnutls-dev \
zlib1g-dev rsync vim nano cmake tabix
# Install libraries needed in the examples
RUN /opt/conda/envs/rapids/bin/pip install \
scanpy==1.9.1 wget pytabix dash-daq \
dash-html-components dash-bootstrap-components dash-core-components \
utils pytest
RUN /opt/conda/envs/rapids/bin/pip install \
torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
WORKDIR /workspace
ENV HOME /workspace
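docker exec does not go through the image's entrypoint, so even if the entrypoint normally activates the rapids environment, the exec'd process does not get it, and pytest (installed by the pip calls above into /opt/conda/envs/rapids/bin) is not on its PATH. A sketch of two common workarounds, assuming that install location:
# call the binary by its full path
docker exec $CONTAINER /opt/conda/envs/rapids/bin/pytest
# or bake the environment's bin directory into PATH in the Dockerfile
ENV PATH=/opt/conda/envs/rapids/bin:$PATH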

How do I install geckodriver and Firefox on my Docker image apache/airflow:2.1.4

I am trying to use Selenium in one of my Airflow tasks. I have Airflow running on the apache/airflow:2.1.4 Docker image.
I get the following error when I use Selenium in my Airflow task (since Firefox is missing):
FileNotFoundError: [Errno 2] No such file or directory: 'firefox': 'firefox'
How would I go about adding geckodriver and firefox to the airflow image?
My docker-compose builds the following Dockerfile for Airflow:
FROM apache/airflow:2.1.4
WORKDIR /python_dependencies
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
apache/airflow:2.1.4 is based on Debian, so you just need apt-get install firefox-esr to get the firefox command, and a pre-built binary downloaded from the geckodriver GitHub releases page to install geckodriver.
Dockerfile:
FROM apache/airflow:2.1.4
USER root
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates curl firefox-esr \
&& rm -fr /var/lib/apt/lists/* \
&& curl -L https://github.com/mozilla/geckodriver/releases/download/v0.30.0/geckodriver-v0.30.0-linux64.tar.gz | tar xz -C /usr/local/bin \
&& apt-get purge -y ca-certificates curl
USER airflow
Verify:
$ docker build -t abc:1 .
$ docker run --rm -it --entrypoint=which abc:1 firefox
/usr/bin/firefox
$ docker run --rm -it --entrypoint=which abc:1 geckodriver
/usr/local/bin/geckodriver

Docker returns ERR_EMPTY_RESPONSE when the backend script takes too long

I am running a FastAPI application in Docker. The backend is composed of multiple .py scripts which train several machine learning models, and FastAPI returns the results. Docker is running and everything is fine. However, when the modeling takes longer (for example with several hyperparameter search loops), I receive ERR_EMPTY_RESPONSE from my dockerized app. Without Docker everything is fine, so I suppose it is some timeout issue.
I have added "shutdown-timeout": 600 to the config.v2.json file in /var/lib/docker/containers (I am on Ubuntu 18.04), but this did not help.
This is my dockerfile:
FROM ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Moscow
RUN apt-get update && apt-get install -y curl wget gcc build-essential
#install python 3.9
RUN apt update
RUN apt install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install python3.9 -y
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3.9 /usr/bin/python
# install conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.12-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda
# create env with python 3.9
RUN /opt/conda/bin/conda create -y -n myenv python=3.9
RUN apt install -y -q build-essential python3-pip python3-dev
RUN pip3 install -U pip setuptools wheel
#install python environment/libraries
COPY requirements.txt /app/requirements.txt
ENV PATH=/opt/conda/envs/myenv/bin:$PATH
RUN pip3 install gunicorn uvloop httptools
RUN pip3 install -r /app/requirements.txt
RUN pip3 install -U kaleido
COPY projectfolder/ /projectfolder/
RUN ls -la /projectfolder/*
WORKDIR /projectfolder
EXPOSE 80
ENTRYPOINT /opt/conda/envs/myenv/bin/gunicorn \
-b 0.0.0.0:80 \
-w 4 \
-k uvicorn.workers.UvicornWorker main:app \
--chdir /projectfolder
This is a sample FastAPI app, just for demo; the sleep mimics the long-running work:
from fastapi import FastAPI
import time
import uvicorn

app = FastAPI()

@app.get("/")
async def root():
    time.sleep(150)
    return {"message": "Hello World from docker"}

if __name__ == "__main__":
    uvicorn.run(app)
I build and run the container with
sudo docker build -t myproject .
sudo docker run -it --rm --name my-running-app -p 80:80 myproject
and open localhost in chrome.
So the question is: how can I extend the timeout, if this is the issue (most likely)?
OK, after experimenting a bit, I found out that the issue was not a Docker timeout but the gunicorn worker timeout (30 seconds by default). To change that, I modified the ENTRYPOINT definition in the Dockerfile and set the timeout option:
ENTRYPOINT /opt/conda/envs/myenv/bin/gunicorn \
-b 0.0.0.0:80 \
-w 4 \
--timeout 600 \
-k uvicorn.workers.UvicornWorker main:app
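A quick way to check the fix, assuming the container is still published on port 80 as in the question: the demo endpoint sleeps for 150 seconds, which is well past the old default worker timeout but within the new 600-second limit, so the request should now complete instead of returning an empty response.
curl --max-time 700 http://localhost/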

GUI menu in docker container freezes (ubuntu parent image)

I've been trying to run a Docker container including the esp8266 toolchain and ESP8266_RTOS_SDK.
When the Dockerfile build is done, the 'Espressif IoT Menu' pops up but freezes instantly and I can't control anything (screenshot of the menu). I thought maybe I had to keep the container running, but that didn't help either; I tried the command RUN tail -f /dev/null.
I also wondered whether the container might be missing some programs needed for a terminal.
Here is my Dockerfile (first time working with docker):
FROM ubuntu:latest
# -------------------------- TOOLCHAIN --------------------------------------
WORKDIR /
RUN apt-get update && apt-get install -y software-properties-common
RUN apt update && add-apt-repository universe
RUN apt-get -y install gcc wget git make libncurses-dev flex bison gperf python3 python3-serial python3-pip
RUN mkdir -p downloads esp8266
ADD https://dl.espressif.com/dl/xtensa-lx106-elf-linux64-1.22.0-100-ge567ec7-5.2.0.tar.gz downloads
RUN cd esp8266;tar -xzf /downloads/xtensa-lx106-elf-linux64-1.22.0-100-ge567ec7-5.2.0.tar.gz
ENV PATH=/esp8266/xtensa-lx106-elf/bin:$PATH
# -------------------------- ESP8266_RTOS_SDK ---------------------------------
RUN cd esp8266;git clone https://github.com/espressif/ESP8266_RTOS_SDK.git
ENV IDF_PATH="/esp8266/ESP8266_RTOS_SDK"
RUN ln -s /usr/bin/python3 /usr/bin/python #otherwise python wont be found
ENV TERM xterm #otherwise "terminal unknown"
RUN python3 -m pip install --user -r $IDF_PATH/requirements.txt
RUN cd esp8266;cp -r $IDF_PATH/examples/get-started/hello_world .
RUN cd /esp8266/hello_world;make menuconfig
I build the image with:
sudo docker build -f $(pwd)/dEsp8266 -t espenv .
The guides I used:
For the toolchain: https://docs.espressif.com/projects/esp8266-rtos-sdk/en/latest/get-started/linux-setup.html
For the RTOS_SDK: https://docs.espressif.com/projects/esp8266-rtos-sdk/en/latest/get-started/index.html#get-started-get-esp-idf
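make menuconfig is an interactive ncurses program, and a RUN step during docker build has no terminal attached, which is the likely reason the menu freezes rather than any missing packages. A sketch of the usual workaround, assuming the final RUN make menuconfig line is dropped from the Dockerfile and the menu is run from an interactive container instead:
sudo docker build -f $(pwd)/dEsp8266 -t espenv .
sudo docker run -it espenv /bin/bash
# inside the container, where a real terminal is attached:
cd /esp8266/hello_world && make menuconfig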

How to fix error occurring during Docker image build: "E: Unsupported file /tmp given on commandline"

I am trying to build an image from a Dockerfile and I am getting the error below:
E: Unsupported file /tmp given on commandline
This is my Dockerfile:
FROM python:3.7-slim-stretch
LABEL version="0.1"
ENV DAEMON_RUN=true
ENV SPARK_VERSION=2.4.4
ENV HADOOP_VERSION=2.7
ENV SCALA_VERSION=2.12.4
ENV SCALA_HOME=/usr/share/scala
ENV SPARK_HOME=/spark
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends \
wget \
tar \
bash \
vim \
less \
RUN cd "/tmp"
But when I run the line below, I get the error mentioned above:
docker build --rm -t test/docker-airflow-spark -f Dockerfile-Spark .
If I remove the last command, RUN cd "/tmp", the error goes away, and if I connect to the container the /tmp folder exists anyway.
Any ideas?
You need to edit the last line of the apt-get command: change less \ to less.
Because of that trailing backslash, the following line is treated as a continuation, so Docker passes RUN cd "/tmp" to apt-get as arguments.
In any case, you should use WORKDIR instead of RUN cd if you want /tmp as the working directory for further steps.
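A minimal sketch of the corrected tail of the Dockerfile, assuming /tmp is wanted as the working directory for later steps:
RUN apt-get install -yqq --no-install-recommends \
    wget \
    tar \
    bash \
    vim \
    less
WORKDIR /tmp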

Resources