I am new to Docker container concepts. I have a pulled tensorflow-gpu-jupyter Docker image that I want to extend into another image, with some additional requirements installed on top of the original pulled image.
EDIT -------
The base image I am using comes from the official TensorFlow repo.
So here is the head of my Dockerfile located in /home/me/docker/:
FROM tensorflow/tensorflow:latest-gpu-jupyter
COPY . .
WORKDIR .
RUN python3 -m pip install --no-cache --upgrade setuptools pip
# RUN pip install -r requirements.txt # <-- adding seaborn & pandas
EXPOSE 8889
ENTRYPOINT ["jupyter","--ip=0.0.0.0"]
This is my Dockerfile. It builds correctly, but when I go to port 8889 it doesn't work. How can I properly change the ENTRYPOINT line?
I just want my new image to run like the base image, i.e. launching a Jupyter notebook.
Yes
You would use that as the base image, i.e. in the FROM line, and create a new image with all your other requirements.
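A minimal sketch of what that could look like (the extra packages, the port, and the working directory are assumptions based on the question, not a verified recipe):
FROM tensorflow/tensorflow:latest-gpu-jupyter
WORKDIR /tf
# requirements.txt would list seaborn and pandas
COPY requirements.txt .
RUN python3 -m pip install --no-cache-dir --upgrade pip setuptools && \
    python3 -m pip install --no-cache-dir -r requirements.txt
EXPOSE 8889
# override the base image's default command only if you really want port 8889;
# otherwise omit the CMD and the notebook starts on 8888 exactly as in the base image
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8889", "--no-browser", "--allow-root"]
Remember that EXPOSE alone does not publish the port; you still need something like docker run --gpus all -p 8889:8889 my-image when starting the container.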
Context
I'm juggling between Dockerfile and docker-compose to figure out the best security practice for deploying my Docker image and pushing it to the Docker registry so everyone can use it.
Currently, I have a FastAPI application that uses an AWS API token for an AWS service. I'm trying to figure out a solution that works in both Docker for Windows (GUI) and Docker for Linux.
In the Docker for Windows GUI it's clear that after I pull the image from the registry I can add the API tokens to the environment of the image and spin up a container.
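For reference, the CLI equivalent of that GUI step is just passing the variables at docker run time (the variable names and image name below are placeholders for whatever the app actually reads):
docker run -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... -p 8000:8000 my-fastapi-image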
I need to know
When it comes to Docker for Linux, I'm trying to figure out a way to build an image with an AWS API token either via Dockerfile or docker-compose.yml.
Things I tried
Followed the solution from this blog
As I said earlier, if I do something like what's mentioned in the blog, it's fine for my personal use, but a user who pulls my Docker image from the registry would also get my AWS secrets. How do I handle this situation in a better way?
Current state of Dockerfile
FROM python:3.10
# Set the working directory to /src
WORKDIR /src
# Copy the current directory contents into the container at /src
ADD ./ /src
# Install any needed packages specified in requirements.txt
#RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run main.py when the container launches
CMD ["python", "main.py"]
I created a multistage Dockerfile where, in the base image, I prepare an Anaconda environment with the required packages, and in the final image I copy the Anaconda installation and install the local package.
I noticed that on every CI build and push all of the layers are recomputed and pushed, including the one big anaconda layer.
Here is how I build it:
DOCKER_BUILDKIT=1 docker build -t my_image:240beac6 \
-f docker/dockerfiles/Dockerfile . \
--build-arg BASE_IMAGE=base_image:240beac64 --build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from my_image:latest
docker push my_image:240beac6
And here is the Dockerfile:
ARG BASE_IMAGE
FROM $BASE_IMAGE as base
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# enable conda
ENV PATH=/root/miniconda3/bin/:${PATH}
COPY --from=base /opt/fast_align/* /usr/bin/
COPY --from=base /usr/local/bin/yq /usr/local/bin/yq
COPY --from=base /root/miniconda3 /root/miniconda3
COPY . /opt/my_package
# RUN pip install --no-deps /opt/my_package
If I leave the last RUN command commented out, Docker rebuilds only the last COPY layer (and only if some file in the context changed).
However, if I uncomment it and try to install the package, it invalidates everything.
Is it because the pip install changes /root/miniconda3?
If so, I am surprised by that; I was hoping later RUN commands couldn't invalidate the cached layers of earlier commands.
Is there a way to copy the conda environment from the base image, install the local package in a separate command, and still benefit from the caching?
Any help is much appreciated.
One solution, albeit a bit hacky, would be to replace the last RUN with a CMD and install the package on start of the container. It would be almost instant, as the requirements are already installed in the base image.
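A rough sketch of that idea, assuming the container's normal command is something like python -m my_package (that part is a placeholder):
COPY . /opt/my_package
# install at container start instead of at build time, then run the app
CMD pip install --no-deps /opt/my_package && python -m my_package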
I developed a machine learning model and integrated it with a Flask app. When I try to run the Docker image for the app, it says I do not have GPU access. How should I write the Dockerfile so that I can use the CUDA GPU inside the container? Below is the current state of the Dockerfile.
FROM python:3.9
WORKDIR /myapp
ADD . /myapp
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python","./app.py" ]
You need to use the --gpus argument when executing docker run; check out the documentation.
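For example (this assumes the NVIDIA Container Toolkit is installed on the host; the image name and published port are placeholders):
docker run --gpus all -p 5000:5000 my-flask-gpu-image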
I did a basic search in the community and could not find a suitable answer, so I am asking here. Sorry if it was asked earlier.
Basically, I am working on a project where we change code at regular intervals, so we need to build the Docker image every time, and because of that we have to install the dependencies from requirements.txt from scratch, which takes around 10 minutes each time.
How can I make changes directly to a Docker image, and how do I configure the ENTRYPOINT (in the Dockerfile) so that changes are reflected in a pre-built Docker image?
You don't edit an image once it's been built. You always run docker build from the start; it always runs in a clean environment.
The flip side of this is that Docker caches built images. If you had image 01234567, ran RUN pip install -r requirements.txt, and got image 2468ace0 out, then the next time you run docker build it will see the same source image and the same command, skip doing the work, and jump directly to the output image. A COPY or ADD of files that have changed invalidates the cache for all later steps.
So the standard pattern is:
# arbitrary choice of language
FROM node:10
WORKDIR /app
# Copy in _only_ the requirements and package lock files
COPY package.json yarn.lock ./
# Install dependencies (once)
RUN yarn install
# Copy in the rest of the application and build it
COPY src/ src/
RUN yarn build
# Standard application metadata
EXPOSE 3000
CMD ["yarn", "start"]
If you only change something in your src tree, docker build will reuse the cache for everything up through the RUN yarn install step, since the package.json and yarn.lock files haven't changed.
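Since the question is about requirements.txt, the same pattern sketched for a Python image (file names are illustrative) would be:
FROM python:3.10
WORKDIR /app
# Copy in _only_ the requirements file
COPY requirements.txt ./
# Install dependencies (cached until requirements.txt changes)
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the rest of the application
COPY . .
# Standard application metadata
EXPOSE 8000
CMD ["python", "main.py"]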
In my case I was facing the same issue: after minor changes I was building the image again and again.
My old Dockerfile:
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
COPY . .
So what I did was create a base image file first, like this (I left out the last line, i.e. did not copy my code):
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
and then built this image using:
docker build -t my_base_img:latest -f base_dockerfile .
Then the final Dockerfile:
FROM my_base_img:latest
WORKDIR /app
COPY . .
With this image as my FROM, I was at first not able to bring the container up because of issues with my copied Python code, but you can edit the code inside the image/container to fix those issues there; by this means I avoided the task of building images again and again.
When my code was fixed, I copied the changes from the container back to my code base and then finally created the final image.
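For copying the fixed files back out of the container, docker cp works, e.g. (the container name and paths here are illustrative):
docker cp my_container:/app/fixed_module.py ./fixed_module.py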
There are 4 steps:
1. Start the image you want to edit (e.g. docker run ...)
2. Modify the running image by shelling into it with docker exec -it <container-id> (you can get the container id with docker ps)
3. Make any modifications (install new things, make a directory or file)
4. In a new terminal tab/window run docker commit c7e6409a22bf my-new-image (substituting in the container id of the container you want to save)
An example
# Run an existing image
docker run -dt existing_image
# See that it's running
docker ps
# CONTAINER ID IMAGE COMMAND CREATED STATUS
# c7e6409a22bf existing-image "R" 6 minutes ago Up 6 minutes
# Shell into it
docker exec -it c7e6409a22bf bash
# Make a new directory for demonstration purposes
# (note that this is inside the existing image)
mkdir NEWDIRECTORY
# Open another terminal tab/window, and save the running container you modified
docker commit c7e6409a22bf my-new-image
# Inspect to ensure it saved correctly
docker image ls
# REPOSITORY TAG IMAGE ID CREATED SIZE
# existing-image latest a7dde5d84fe5 7 minutes ago 888MB
# my-new-image latest d57fd15d5a95 2 minutes ago 888MB
I am trying to build a Docker image. My Dockerfile is like this:
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirement.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8300"]
And my requirement.txt file is like this:
wheel==0.29.0
numpy==1.11.3
django==1.10.5
django-cors-headers==2.0.2
gspread==0.6.2
oauth2client==4.0.0
Now I have a small change in my code, and I need pandas, so I add it to the requirement.txt file:
wheel==0.29.0
numpy==1.11.3
pandas==0.19.2
django==1.10.5
django-cors-headers==2.0.2
gspread==0.6.2
oauth2client==4.0.0
pip install -r requirement.txt will install all packages in that file, although most of them were already installed before. My question is: how do I make pip install only pandas? That would save time when building the image.
Thank you
If you rebuild your image after changing requirement.txt with docker build -t <your_image> ., I'm afraid this can't be avoided: each time Docker runs docker build it starts an intermediate container from the base image, and that is a new environment, so pip obviously needs to install all of the dependencies again.
You could consider building your own base image on top of python:2.7 with the common dependencies pre-installed, then building your application image on top of that base image. Whenever you need to add more dependencies, manually rebuild the base image on top of the previous one with only the extra dependencies installed, and then optionally docker push it back to your registry.
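A rough sketch of that approach (the image and file names are illustrative):
# Dockerfile.base, rebuilt only when the common dependencies change
FROM python:2.7
COPY requirement.txt /tmp/requirement.txt
RUN pip install -r /tmp/requirement.txt

# application Dockerfile, built on top of the base image
FROM my-app-base:latest
ADD . /code
WORKDIR /code
# packages already present in the base image are skipped ("Requirement already satisfied"),
# so only new entries such as pandas actually get installed
RUN pip install -r requirement.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8300"]
The base image would be built with something like docker build -t my-app-base:latest -f Dockerfile.base . and pushed to your registry if other machines need it.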
Hope this could be helpful :-)