Issues with running a Keras model in Docker

I wrote a Python script for a Keras model that uses TensorFlow as the backend. I tried to run this model in Docker, but whenever I ran it, the output was always "Killed". Is this a memory issue, i.e. my Docker container doesn't have enough memory to run my Python script? Any suggestions are greatly appreciated.
I checked my Python script, and it runs fine in other environments.
The data I feed the model is very big. When I use smaller data, the model works in the container. Any suggestions on how to work with the large data set?
This is the Dockerfile I used to build the image:
FROM tensorflow/tensorflow:latest
RUN pip install pandas
RUN pip install numpy
# the PyPI package name is scikit-learn; the old "sklearn" name is a deprecated stub
RUN pip install scikit-learn
RUN pip install matplotlib
RUN pip install keras
# the base image already ships tensorflow, so reinstalling it is unnecessary
RUN mkdir /app
COPY . /app
# exec-form CMD must use straight double quotes to be valid JSON
CMD ["python", "/app/model2-keras-model.py"]
When I ran it in the container, this is all the output I got:
Killed
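A bare "Killed" usually means the kernel's OOM killer stopped the process, which fits the observation that smaller data works. A quick way to test that hypothesis (the image name below is a placeholder) is to watch live memory use and rerun with a higher memory limit:
$ docker stats                       # live memory usage per container
$ docker run -m 8g my-keras-image    # rerun with an 8 GB memory cap
On Docker Desktop (Mac/Windows), the VM-wide memory setting in the Docker preferences also caps what any container can use, so it may need raising as well.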
Related

Docker Container Can't Find Package Even after install

I have the following repository structure (I know the names are stupid):
dummy/
    main/
        hello.py
        requirements.txt
    yips/
        yips/
            __init__.py
            lol.py
        setup.py
    Dockerfile
The idea is to run hello.py, which imports a method from lol.py in the yips library. For testing purposes, I import sklearn in lol.py despite not using it. My Dockerfile looks like the following:
FROM python:3.9-bullseye
WORKDIR /yips
COPY yips/ ./
RUN pip3 install .
WORKDIR /main
COPY ./main/ .
RUN pip3 install --no-cache-dir -r ./requirements.txt
CMD ["python3", "hello.py"]
requirements.txt lists both sklearn and numpy; numpy is used in hello.py.
I have tried running the Docker image, and it complains that it cannot find sklearn. For what it's worth, when I do not import sklearn, everything works fine (so there is no issue with the numpy import in hello.py). I have also tried adding a direct pip install sklearn before installing my yips library. Does anyone have any insight on how to fix this?
You might have put sklearn in your requirements.txt when you should include scikit-learn; on PyPI, sklearn is only a deprecated stub for scikit-learn.
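For the layout above, a corrected requirements.txt would then read (mirroring the two packages the question mentions):
scikit-learn
numpy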

Installing Python packages on Kubernetes using Helm charts

I am trying to install some external Python packages with pip that would let me use Snowflake from Apache Airflow.
I have a Dockerfile, and I am using Helm charts to install Airflow.
Now I need to add some Python dependencies to integrate Snowflake and Airflow, and I see two ways of doing this:
Idea 1:
Add the Python packages to the image via a requirements.txt file listing my pip packages, then run docker build with this Dockerfile.
Idea 2:
Add the Python packages to the values.yaml file and use it to upgrade my Airflow Helm chart, so that it installs both Airflow and the packages.
I tried both and neither seems to work; I don't see my packages.
Are there any alternative or recommended ways of doing this?
I was able to solve this by updating the Dockerfile, as other users suggested above.
Add your Python packages to a requirements.txt file and save it in your working directory:
FROM apache/airflow:latest
USER airflow
# WORKDIR must be a path inside the image, not on the host;
# requirements.txt is copied in from the build context
WORKDIR /opt/airflow
COPY requirements.txt ./
RUN pip install -r requirements.txt
You can also do this without a requirements.txt file:
FROM apache/airflow:latest
USER airflow
RUN pip install "package1" "package2" "package3"
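Either way, the Helm release then has to point at the resulting image. A minimal sketch, assuming the official apache-airflow/airflow chart (the registry name and tag are placeholders):
$ docker build -t my-registry/airflow-snowflake:1.0 .
$ docker push my-registry/airflow-snowflake:1.0
$ helm upgrade airflow apache-airflow/airflow \
    --set images.airflow.repository=my-registry/airflow-snowflake \
    --set images.airflow.tag=1.0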

persistent pip install in rapids.ai docker container

This is probably a really stupid question, but one has to start somewhere. I am playing with NVIDIA's rapids.ai GPU-enhanced Docker container, but this (presumably by design) does not come with PyTorch. Now, of course, I can do a pip install torch torch-ignite every time, but this is both annoying and resource-consuming (and PyTorch is a large download). What is the approved method for persisting a pip install in a container?
Create a new Dockerfile that builds a new image based on the existing one:
FROM the/rapids-ai/image
RUN pip install torch torch-ignite
And then build it:
$ ls Dockerfile
Dockerfile
$ docker build -t myimage .
You can now do:
$ docker run myimage
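Since the RAPIDS image is GPU-enabled, you will likely also want to pass the GPUs through at run time (this assumes Docker 19.03+ with the NVIDIA container toolkit installed):
$ docker run --gpus all myimage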

"No module named PIL" after "RUN pip3 install Pillow" in docker container; neither PIL nor Pillow present in dist-packages directory

I'm following this SageMaker guide and using the 1.12 CPU Dockerfile:
https://github.com/aws/sagemaker-tensorflow-serving-container
If I use the requirements.txt file to install Pillow, my container works great locally, but when I deploy to SageMaker, 'pip3 install' fails with an error indicating my container doesn't have internet access.
To work around that issue, I'm trying to install Pillow in my container before deploying to SageMaker.
When I include the lines "RUN pip3 install Pillow" and "RUN pip3 show Pillow" in my Dockerfile, the build output says "Successfully installed Pillow-6.2.0", and the show command indicates the lib was installed at /usr/local/lib/python3.5/dist-packages. Running "RUN ls /usr/local/lib/python3.5/dist-packages" in the Dockerfile also shows "PIL" and "Pillow-6.2.0.dist-info" in dist-packages, and the PIL directory includes many code files.
However, when I run my container locally, "from PIL import Image" fails with "No module named PIL". I've tried variations like "import Image", but PIL doesn't seem to be installed in the context in which the code runs when I start the container.
Before the line "from PIL import Image", I added "import subprocess" and 'print(subprocess.check_output("ls /usr/local/lib/python3.5/dist-packages".split()))'
This ls output matches what I get when running the same command in the Dockerfile, except that "PIL" and "Pillow-6.2.0.dist-info" are missing. Why are those two in /usr/local/lib/python3.5/dist-packages when the Dockerfile runs, but not when my container is started locally?
Is there a better way to include Pillow in my container? The referenced GitHub page also shows that I can deploy libraries by including the files (in code/lib of the model package). But to get files compatible with Ubuntu 16.04 (which the Docker container uses; I'm on a Mac), I'd probably have to copy them out of the Docker container after running "RUN pip3 install Pillow" in my Dockerfile, and it seems odd that I would need to pull files from the Docker container just to deploy them to the Docker container.
My docker file looks like:
ARG TFS_VERSION
FROM tensorflow/serving:${TFS_VERSION} as tfs
FROM ubuntu:16.04
LABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true
COPY --from=tfs /usr/bin/tensorflow_model_server /usr/bin/tensorflow_model_server
# nginx + njs
RUN \
    apt-get update && \
    apt-get -y install --no-install-recommends curl && \
    curl -s http://nginx.org/keys/nginx_signing.key | apt-key add - && \
    echo 'deb http://nginx.org/packages/ubuntu/ xenial nginx' >> /etc/apt/sources.list && \
    apt-get update && \
    apt-get -y install --no-install-recommends nginx nginx-module-njs python3 python3-pip python3-setuptools && \
    apt-get clean
RUN pip3 install Pillow
# cython, falcon, gunicorn, tensorflow-serving
RUN \
    pip3 install --no-cache-dir cython falcon gunicorn gevent requests grpcio protobuf tensorflow && \
    pip3 install --no-dependencies --no-cache-dir tensorflow-serving-api
COPY ./ /
ARG TFS_SHORT_VERSION
ENV SAGEMAKER_TFS_VERSION "${TFS_SHORT_VERSION}"
ENV PATH "$PATH:/sagemaker"
RUN pip3 show Pillow
RUN ls /usr/local/lib/python3.5/dist-packages
I've tried installing Pillow on the same line as cython and the other dependencies, but the result is the same: those dependencies are in /usr/local/lib/python3.5/dist-packages both when the image is built and when the container is started locally, while "PIL" and "Pillow-6.2.0.dist-info" are only present when the image is built.
Apologies for the late response.
If I use the requirements.txt file to install Pillow, my container works great locally, but when I deploy to SageMaker, 'pip3 install' fails with an error indicating my container doesn't have internet access.
If restricted internet access isn't a requirement, then you should be able to enable internet access by making enable_network_isolation=False when instantiating your Model class in the SageMaker Python SDK, as shown here: https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/model.py#L85
If restricted internet access is a requirement, this means that you will need to either install your dependencies in your own container beforehand or make use of the packaging as you mentioned in your correspondence.
I copied your provided Dockerfile, built an image from it, and ran it as a container in order to reproduce the error you are seeing. I was not able to reproduce the error quoted below:
However, when I run my container locally, trying to import in python using "from PIL import Image" results in error "No module named PIL". I've tried variations like "import Image", but PIL doesn't seem to be installed in the context in which the code is running when I start the container.
I created a similar Docker image and ran it as a container with the following command:
docker run -it --entrypoint bash <DOCKER_IMAGE>
From within the container, I started a Python 3 session and ran the following commands without error:
root@13eab4c6e8ab:/# python3 -s
Python 3.5.2 (default, Oct 8 2019, 13:06:37)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from PIL import Image
Can you please provide the code for how you're starting your SageMaker jobs?
Please double check that the Docker image you have created is the one being referenced when starting your SageMaker jobs.
Please let me know if there is anything I can clarify.
Thanks!

Is there a way to use conda to install libraries in a Docker image?

I'm trying to install some libraries (specifically PyTorch) in my Docker image. I have a Dockerfile that installs Anaconda properly, and now I'd like to use conda to install a few other things in the image. Is there a way to do this? I've tried
RUN /opt/anaconda/bin/conda install -y pytorch torchvision
And anaconda is installed in the right location. Am I doing this the right way?
First, check whether the way you have added/installed Anaconda in your own image mirrors the official ContinuumIO/docker-images/anaconda Dockerfile.
Second, you can test the installation dynamically, as the README recommends:
docker run -it yourImage /bin/bash -c "/opt/conda/bin/conda install ..."
If that is working correctly, you can do the same in your Dockerfile.
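If the dynamic test works, the same command can be baked into the image. A minimal sketch, assuming conda lives at /opt/conda as in the ContinuumIO images and reusing the packages from the question:
FROM continuumio/anaconda3
# -y answers the confirmation prompt so the build does not block on input
RUN /opt/conda/bin/conda install -y pytorch torchvision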
