Accessing the GPU in Docker for a PyTorch Model

I developed a machine learning model and integrated it with a Flask app. When I try to run the Docker image for the app, it says I do not have GPU access. How should I write the Dockerfile so that I can use a CUDA GPU inside the container? Below is the current state of the Dockerfile.
FROM python:3.9
WORKDIR /myapp
ADD . /myapp
RUN pip3 install -r requirements.txt
CMD [ "python","./app.py" ]

You need to use the --gpus argument when executing docker run; check out the documentation.
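For example, assuming the host has an NVIDIA driver plus the NVIDIA Container Toolkit installed and the image is tagged myapp (both assumptions, not details from the post):
docker run --gpus all -p 5000:5000 myapp
Note that the python:3.9 base image ships no CUDA toolkit of its own; the standard PyTorch pip wheels bundle their own CUDA runtime libraries, so --gpus all plus a host driver is usually enough, though an NVIDIA CUDA base image (e.g. nvidia/cuda) is an option if other CUDA tooling is needed inside the container.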


Managing secrets via Dockerfile/docker-compose

Context
I'm weighing Dockerfile against docker-compose to figure out the best security practice for deploying my Docker image and pushing it to the Docker registry so everyone can use it.
Currently, I have a FastAPI application that uses an AWS API token for an AWS service. I'm trying to find a solution that works in both Docker for Windows (GUI) and Docker for Linux.
In the Docker Desktop GUI on Windows, it's clear that after I pull the image from the registry I can add API tokens to the container's environment and spin up a container.
I need to know
When it comes to Docker for Linux, I'm trying to figure out a way to build an image with an AWS API token either via Dockerfile or docker-compose.yml.
Things I tried
Followed the solution from this blog
As I said earlier, if I do something like what the blog describes, it's fine for my personal use, but any user who pulls my Docker image from the registry will also have my AWS secrets. How do I handle this situation in a better way?
Current state of Dockerfile
FROM python:3.10
# Set the working directory to /src
WORKDIR /src
# Copy the current directory contents into the container at /src
ADD ./ /src
# Install any needed packages specified in requirements.txt
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run main.py when the container launches
CMD ["python", "main.py"]

Check the file contents of a docker image

I am new to Docker, and I built my image with
docker build -t mycontainer .
The contents of my Dockerfile are
FROM python:3
COPY ./* /my-project/
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Here I get an error:
Could not open requirements file: No such file or directory: 'requirements.txt'
I am not sure whether all the files from my local machine were actually copied into the image.
I want to inspect the contents of the image; is there any way I can do that?
Any help will be appreciated!
When you run docker build (with the classic builder; BuildKit output looks different), it should print out a line like
Step 2/4 : COPY ./* /my-project/
---> 1254cdda0b83
That number is actually a valid image ID, so you can get a debugging shell in that image:
docker run --rm -it 1254cdda0b83 bash
In particular, the container that starts there will have the exact filesystem, environment variables (from ENV directives), current directory (WORKDIR), user (USER), and so on as of that step; directly typing in the next RUN command should give the same result as Docker running it itself.
(In this specific case, try running pwd and ls -l in the debugging shell; does your Dockerfile need a WORKDIR to tell the pip command where to run?)
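For instance, using the example image ID from the build output above, you can list what the COPY step actually produced without starting an interactive shell:
docker run --rm 1254cdda0b83 ls -la /my-project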
You just have to get into the project directory and run the pip command.
The best way to do that is to set WORKDIR /my-project!
This is the updated file
FROM python:3
COPY ./* /my-project/
WORKDIR /my-project
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Kudos!

Dockerize two things and use in one directory

I have an Angular - Flask app that I'm trying to dockerize with the following Dockerfile:
FROM node:latest as node
COPY . /APP
COPY package.json /APP/package.json
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
CMD ng build --base-href /static/
FROM python:3.6
WORKDIR /root/
COPY --from=0 /APP/ .
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py"]
When I build and run the image, the console gives no errors; however, it seems to be stuck. What could be the issue here?
Is it because they are both in different directories?
Since I'm dockerizing Flask as well as Angular, how can I put both in the same directory (right now one is in /APP and the other in /root)?
OR should I put the two in separate containers and use a docker-compose.yml file?
In that case, how do I write the file? Actually, my Flask app calls my Angular app, and both run on the same port, so I'm not sure if running them in two different containers is a good idea.
I am also providing the commands that I use to build and run the image for reference:
docker image build -t prj .
docker container run --publish 5000:5000 --name prj prj
In the first stage of the Dockerfile (the Angular build), use RUN instead of CMD.
CMD only sets the command to run when a container starts from the final image; a CMD in an earlier build stage never executes, so ng build never actually runs.
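A minimal sketch of the corrected first stage (the Python stage stays as-is):
FROM node:latest as node
COPY . /APP
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
RUN ng build --base-href /static/
With ng build executed as a RUN step, its output actually exists in the stage that COPY --from=0 /APP/ . reads from.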

Getting "/usr/bin/python: No module named" whenever I run the Docker container

I have a Flask API along with another class for machine learning prediction that I need to dockerize, so I have the Dockerfile shown below. I run the container using
sudo docker run -d -p 5000:5000 test
However, every time I run it, it crashes.
I can see that its status is "Exited (1)" whenever I run
docker ps --all
When I run docker logs containerIDHere, I get the reason for the crash:
/usr/local/bin/python: No module named
Dockerfile
FROM python:3.6
RUN mkdir -p /app
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r ./requirements.txt
COPY Network /app
COPY Train /app
EXPOSE 5000
CMD ["python", "-m" ,"Network.Api"]

How do I use the same python image for several docker containers?

I have a few Python services running in Docker. My Dockerfile looks like this:
FROM python:3.6-slim
WORKDIR /app
COPY . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 80
CMD ["python", "-u", "app.py"]
When I run "docker images -a" command, I can see that each service has its own python parent image:
Is that really so? How do I change my Dockerfiles to make the different services use the same image? Do I need to use docker-compose?
It seems I can delete the intermediate images with
docker save --output image.tar ImageID-or-Name
docker image prune -fa
docker load --input image.tar
I think this is what I need.
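Worth noting, as a general Docker fact rather than something established in this thread: images built FROM the same parent already share that parent's layers on disk, so the python:3.6-slim layers are stored only once however many service images reference them; docker images reports each image's full logical size, not extra disk consumed. Per-image shared versus unique sizes can be inspected with:
docker system df -v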
