Docker stuck building Conv2D model

shortcut = tensorflow.keras.layers.Conv2D(filters, 1, strides=stride, use_bias=False, kernel_initializer='glorot_normal', name=name + '_0_conv')(x)
where filters is 64, stride is 2, and name is 'conv2_block1'.
This line works perfectly fine on my local machine but gets stuck inside Docker.
My Dockerfile is below.
FROM python:3.7.9-buster
RUN apt-get update \
&& apt-get install -y -qq \
&& apt install cmake -y \
&& apt-get install ffmpeg libsm6 libxext6 -y \
&& apt-get clean
RUN pip3 install --upgrade pip
# Install libraries
COPY ./requirements.txt ./
RUN pip install -r requirements.txt && \
rm ./requirements.txt
RUN pip install fire
# Setup container directories
RUN mkdir /app
# Copy local code to the container
COPY . /app
# launch server with gunicorn
WORKDIR /app
EXPOSE 8080
ENV PORT 8080
ENV FLASK_CONF config.ProductionConfig
# CMD ["gunicorn", "main:app", "--timeout=60", "--preload", \
# "--workers=1", "--threads=4", "--bind :$PORT"]
CMD exec gunicorn --bind :$PORT main:app --preload --workers 9 --threads 5 --timeout 120
And this is my requirements.txt:
opencv-python
tensorflow==2.2.0
protobuf==3.20.*
cmake
dlib
numpy==1.16.*

The hang turned out to be caused by the workers exhausting their resources; removing the --preload argument did the job, since the models are then loaded at runtime rather than in the master process before the workers are forked.
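
For reference, a minimal sketch of the adjusted startup line (the worker and thread counts here are illustrative, not taken from the original setup):

# Drop --preload so each worker loads the model at runtime instead of in the master process
CMD exec gunicorn --bind :$PORT main:app --workers 2 --threads 4 --timeout 120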

Related

Run 32bit app on Ubuntu 20.04 docker container

I built a ubuntu image using the following Dockerfile:
FROM ubuntu:20.04
# Disable Prompt During Packages Installation
ARG DEBIAN_FRONTEND=noninteractive
# Add 32bit architecture
RUN dpkg --add-architecture i386 \
&& apt-get update \
&& apt-get install -y libc6:i386 libncurses5:i386 libstdc++6:i386 zlib1g:i386
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN apt-get update && apt-get install -y \
iputils-ping \
python3 python3-pip
# Copy app to container
COPY . /app
WORKDIR /app
# Install pip requirements
COPY requirements.txt /app
RUN python3 -m pip install -r requirements.txt
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["bash"]
I've been trying to run a 32-bit app (hence the first RUN command in the Dockerfile) that I have inside the my_app directory, using:
./app
but I keep getting
bash: ./app: No such file or directory
I built your Dockerfile with no errors; do you have more details?
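
Not part of the original thread, but a quick way to narrow this down: for a 32-bit binary that does exist, bash reports "No such file or directory" when the ELF interpreter or a shared library it needs is missing, which can be checked inside the container:

ls -l ./app    # confirm the binary really is at this path inside the container
file ./app     # should say "ELF 32-bit LSB executable" (the file package may need installing)
ldd ./app      # shows the interpreter and libraries it needs; "not found" entries are the culprits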

How to run sudo commands in Docker?

I'm trying to build a Docker container containing SQLite3 and Flask, but SQLite isn't getting installed because sudo needs a password. How can this be solved?
The error:
Step 6/19 : RUN sudo apt-get install -y sqlite3
---> Running in 9a9c8f8104a8
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
The command '/bin/sh -c sudo apt-get install -y sqlite3' returned a non-zero code: 1
The Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
RUN sudo apt-get install -y sqlite3
RUN mkdir /db
RUN /usr/bin/sqlite3 /db/test.db
CMD /bin/bash
RUN sudo apt-get install -y python
WORKDIR /usr/src/app
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
sudo is not necessary, as you can install everything before switching users.
You should also think in terms of consistent layers: each new version of your image should only change the layers that actually differ.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Below is an example of what you could use instead of the provided Dockerfile.
The idea is to install the dependencies as root first and then run the configuration commands as the unprivileged user.
Be aware that CMD can be replaced at runtime:
docker run myimage <CMD>
# Base image, based on python installed on debian
FROM python:3.9-slim-bullseye
# Arguments used to run the app
ARG user=docker
ARG group=docker
ARG uid=1000
ARG gid=1000
ARG app_home=/usr/src/app
ARG sql_database_directory=/db
ARG sql_database_name=test.db
# Environment variables, user defined
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
# Install sqlite
RUN apt-get update \
&& apt-get install -y sqlite3 \
&& apt-get clean
# Create app user
RUN mkdir -p ${app_home} \
&& chown ${uid}:${gid} ${app_home} \
&& groupadd -g ${gid} ${group} \
&& useradd -d "${app_home}" -u ${uid} -g ${gid} -s /bin/bash ${user}
# Create sql database directory
RUN mkdir -p ${sql_database_directory} \
&& chown ${uid}:${gid} ${sql_database_directory}
# Switch to user defined by arguments
USER ${user}
RUN /usr/bin/sqlite3 ${sql_database_directory}/${sql_database_name}
# Copy & Run application (by default)
WORKDIR ${app_home}
COPY . .
RUN pip install --no-cache-dir --no-warn-script-location -r requirements.txt
CMD ["python", "-m", "flask", "run"]

Docker multistage build Issues

My current Dockerfile looks like the one below. I'm trying to use h2o as the base for my ML model service. h2o requires a JRE, and on top of that I have to install the packages required by my Flask script. The resulting image was as heavy as 1.8 GiB, so I attempted a multi-stage build (script below).
#Original Docker File
FROM h2oai/h2o-open-source-k8s
MAINTAINER rajesh.r6r#gmail.com
USER root
WORKDIR /app
ADD . /app
RUN set -xe \
&& apt-get update -y \
&& apt-get install python-pip -y \
&& rm -rf /var/lib/apt/lists/* # remove the cached files
RUN pip install --upgrade pip
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5005
EXPOSE 54321
ENV NAME World
CMD ["python", "app.py"]
I attempted a multi-stage build as follows, but this only results in a Python image, skipping the h2o part. What am I missing?
#Multi-Stage Docker File
FROM h2oai/h2o-open-source-k8s AS baseimage
FROM python:3.7-slim
USER root
WORKDIR /app
ADD . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5005
EXPOSE 54321
ENV NAME World
CMD ["python", "app.py"]

Setting up our Rasa/NLU container, error?

I have this file Dockerfile.nlu
FROM chatbot/spacy:latest
WORKDIR /app
COPY nlu ./agent_nlu
RUN python -m rasa_nlu.train --config agent_nlu/config.yml --data agent_nlu/data/ --path agent_nlu/agent --fixed_model_name default
and I get the error below:
]$ sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .
Sending build context to Docker daemon 9.216kB
Step 1/4 : FROM chatbot/spacy:latest
---> 496dc6a38abb
Step 2/4 : WORKDIR /app
---> Using cache
---> 7f02012c8452
Step 3/4 : COPY nlu ./agent_nlu
COPY failed: stat /var/lib/docker/tmp/docker-builder363868051/nlu: no such file or directory
It doesn't look like Docker can find the nlu directory. Are you sure it exists, and that you are executing the command from the correct directory?
You also aren't installing Rasa at all, or any of its requirements. Is there a reason you aren't using the pre-built Rasa images and their documentation?
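
For context (this note is not from the original answer): COPY resolves paths relative to the build context, i.e. the final . in the docker build command, not relative to the Dockerfile's location. With the command shown above, the build has to be run from a directory that contains nlu/ directly, for example:

project/
    nlu/                     # the directory being copied; must sit directly in the build context
    docker/Dockerfile.nlu

cd project
sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .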
Here is a fully functional Dockerfile pulled from their repo:
FROM python:3.6-slim
ENV RASA_NLU_DOCKER="YES" \
RASA_NLU_HOME=/app \
RASA_NLU_PYTHON_PACKAGES=/usr/local/lib/python3.6/dist-packages
# Run updates, install basics and cleanup
# - build-essential: Compile specific dependencies
# - git-core: Checkout git repos
RUN apt-get update -qq \
&& apt-get install -y --no-install-recommends build-essential git-core openssl libssl-dev libffi6 libffi-dev curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR ${RASA_NLU_HOME}
COPY . ${RASA_NLU_HOME}
# use bash always
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN pip install -r alt_requirements/requirements_spacy_sklearn.txt
RUN pip install -e .
RUN pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.0.0/en_core_web_md-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link en_core_web_md en \
&& pip install https://github.com/explosion/spacy-models/releases/download/de_core_news_sm-2.0.0/de_core_news_sm-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link de_core_news_sm de
COPY sample_configs/config_spacy.yml ${RASA_NLU_HOME}/config.yml
VOLUME ["/app/projects", "/app/logs", "/app/data"]
EXPOSE 5000
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start", "-c", "config.yml", "--path", "/app/projects"]

docker-compose: Service 'web' failed to build

I'm trying to install apache2, libapache2-mod-wsgi-py3 and openssl in the container. I've removed some packages and fixed typos in the Dockerfile, but the error is still there.
When I run docker-compose build, everything runs fine until it hits the part of the Dockerfile where these packages are installed, and then I get this error:
E: Unable to locate package RUN
E: Unable to locate package apt-get
E: Unable to locate package install
ERROR: Service 'web' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y apache2 libapache2-mod-wsgi-py3 curl dpgk-sig RUN apt-get install -yq openssh-server' returned a non-zero code: 100
You can check the whole installation process here, and this is my Dockerfile:
FROM ubuntu:16.04
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN cat /etc/passwd
RUN cat /etc/group
RUN apt-get update && apt-get install -y \
apache2 \
libapache2-mod-wsgi-py3 \
RUN apt-get install -y openssl
RUN mkdir /var/run/sshd
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
EXPOSE 80
ADD config/apache/000-default.conf /etc/apache/sites-available/000-default.conf
ADD config/start.sh /tmp/start.sh
ADD src /var/www
RUN chown -R root:www-data /var/www
RUN chmod u+rwx,g+rx,o+rx /var/www
RUN find /var/www -type d -exec chmod u+rwx,g+rx,o+rx {} +
RUN find /var/www -type f -exec chmod u+rw,g+rw,o+r {} +
#essentially: CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
CMD ["/tmp/start.sh"]
Can someone explain why this is happening and how to fix it? Thanks.
Your problem is this line:
libapache2-mod-wsgi-py3 \
The \ is a line continuation, so the next thing the shell sees is RUN (and then apt-get and install), which it treats as package names it can't find. Drop the trailing \ and it should work fine.
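
Applied to the Dockerfile above, the fixed block would look roughly like this:

RUN apt-get update && apt-get install -y \
    apache2 \
    libapache2-mod-wsgi-py3
RUN apt-get install -y openssl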
