Creating a Dockerfile for a .deb file - docker

I want to create a Dockerfile for a Debian package (.deb) that runs on Ubuntu 18.04. So far I've written this:
FROM ubuntu:18.04 AS ubuntu
RUN apt-get update
WORKDIR /Downloads/invisily
RUN apt-get install ./invisily.deb
All steps run fine except the last one, which shows this error:
E: Unsupported file ./invisily.deb given on commandline
The command '/bin/sh -c apt-get install ./invisily.deb' returned a non-zero code: 100
I'm new to Docker and the cloud, so any help would be appreciated. Thanks!
Edit:
I solved it by putting the Dockerfile and the .deb file in the same directory and using COPY . ./
This is what my Dockerfile looks like now:
FROM ubuntu:18.04 AS ubuntu
RUN apt-get update
WORKDIR /invisily
COPY . ./
USER root
RUN chmod +x a.deb && \
    apt-get install -y ./a.deb

A few things:
- WORKDIR is the working directory inside of your container.
- You will need to copy invisily.deb from your local machine into the image when you build it.
- You can chain several shell commands in a single RUN instruction by joining them with && across multiple lines.
Try something like this:
FROM ubuntu:18.04 AS ubuntu
WORKDIR /opt/invisily
# Drop invisily.deb into the same directory as your Dockerfile.
# This copies it from your local machine into the image, under /opt/invisily.
COPY invisily.deb .
RUN apt-get update && \
    chmod +x invisily.deb && \
    apt-get install -y ./invisily.deb
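With invisily.deb sitting next to the Dockerfile, a quick way to build and sanity-check the image could look like this (the image tag and the assumption that the package registers itself under the name invisily are mine):
docker build -t invisily-test .
# verify the package really ended up installed (assumes the package name is invisily)
docker run --rm invisily-test dpkg -s invisily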

There isn't any invisily.deb file in your WORKDIR, so if you have it locally you can copy it into the container like this:
FROM ubuntu ...
WORKDIR /Downloads/invisily
RUN apt-get update
COPY ./<path-to-your-invisily.deb> ./
RUN chmod +x ./invisily.deb
RUN apt-get install -y ./invisily.deb

Related

Error when trying to use COPY in Dockerfile

I have files in the same directory as a Dockerfile. I am trying to copy four of those files into the container, into a directory called ~/.u2net/
This is the Dockerfile code:
FROM nvidia/cuda:11.6.0-runtime-ubuntu18.04
ENV DEBIAN_FRONTEND noninteractive
RUN rm /etc/apt/sources.list.d/cuda.list || true
RUN rm /etc/apt/sources.list.d/nvidia-ml.list || true
RUN apt-key del 7fa2af80
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
RUN apt update -y
RUN apt upgrade -y
RUN apt install -y curl software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install -y python3.9 python3.9-distutils
RUN curl https://bootstrap.pypa.io/get-pip.py | python3.9
WORKDIR /rembg
COPY . .
RUN python3.9 -m pip install .[gpu]
RUN mkdir -p ~/.u2net
COPY u2netp.onnx ~/.u2net/u2netp.onnx
COPY u2net.onnx ~/.u2net/u2net.onnx
COPY u2net_human_seg.onnx ~/.u2net/u2net_human_seg.onnx
COPY u2net_cloth_seg.onnx ~/.u2net/u2net_cloth_seg.onnx
EXPOSE 5000
ENTRYPOINT ["rembg"]
CMD ["s"]
I get the following error
Step 18/24 : COPY u2netp.onnx ~/.u2net/u2netp.onnx
COPY failed: file not found in build context or excluded by .dockerignore: stat u2netp.onnx: file does not exist
ERROR
The file .dockerignore contains the following
!rembg
!setup.py
!setup.cfg
!requirements.txt
!requirements-cpu.txt
!requirements-gpu.txt
!versioneer.py
!README.md
Any idea why I can't copy the files? I also tried the following without success:
COPY ./u2netp.onnx ~/.u2net/u2netp.onnx
COPY ./u2net.onnx ~/.u2net/u2net.onnx
COPY ./u2net_human_seg.onnx ~/.u2net/u2net_human_seg.onnx
COPY ./u2net_cloth_seg.onnx ~/.u2net/u2net_cloth_seg.onnx
EDIT:
I am deploying the container to Google Cloud Run using the command gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT2}/${SAMPLE2}
EDIT 2:
As I am already copying everything with COPY . ., I tried to move the files with the following commands:
RUN mv u2netp.onnx ~/.u2net/u2netp.onnx
RUN mv u2net.onnx ~/.u2net/u2net.onnx
RUN mv u2net_human_seg.onnx ~/.u2net/u2net_human_seg.onnx
RUN mv u2net_cloth_seg.onnx ~/.u2net/u2net_cloth_seg.onnx
But I got an error
Step 18/24 : RUN mv u2netp.onnx ~/.u2net/u2netp.onnx
Running in 423d1e0e1a0b
mv: cannot stat 'u2netp.onnx': No such file or directory
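For what it's worth: ! lines in a .dockerignore only re-include paths that an earlier pattern excluded, so a whitelist like the one shown is normally paired with a catch-all rule such as * at the top. If such a rule is in effect, the .onnx files would be dropped from the build context that gcloud builds submit uploads, and a sketch of the extra entries that would keep them is:
# hypothetical additions, assuming a leading * rule excludes everything by default
!u2netp.onnx
!u2net.onnx
!u2net_human_seg.onnx
!u2net_cloth_seg.onnx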

Docker build failing when the Dockerfile is given as STDIN input

Why does the Docker image build fail when built with - ?
Host Details
- docker desktop community 2.1.0.5 for Windows
- Windows 10
Dockerfile:
FROM ubuntu:latest
MAINTAINER "rizwan#gm.com"
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install --upgrade pip
WORKDIR /app
COPY . /app
COPY requirements.txt /app/requirements.txt
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
CMD ["python3", "my_service.py","--input-path= /input.csv", "--output-path=/output.csv"]
Folder Structure
- Root
  - Application.py
  - Dockerfile
  - requirements.txt
COMMAND
- Failing: docker build - < Dockerfile
  Message: ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
- Successful: docker build .
When you run
docker build - < Dockerfile
it sends only the Dockerfile to the Docker daemon, but no other files. When you tell Docker to COPY a file into the image, you haven't actually sent it that file. It's very similar to listing everything in your source tree in the .dockerignore file.
Typically you'll send Docker the current directory as the context directory instead:
docker build . # knows to look for Dockerfile by default
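If the Dockerfile has a non-standard name or location, you can still send the directory as the context and point at the file explicitly with -f, for example (the path and tag here are just illustrative):
docker build -f docker/Dockerfile.prod -t myapp .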

How do I modify my Dockerfile to install wget into a Kubernetes pod?

Right now my Dockerfile builds a dotnet image that is installed/updated and run inside its own pod in a Kubernetes cluster.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
ARG ArtifactPAT
WORKDIR /src
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
COPY /src .
RUN dotnet restore "./sourceCode.csproj" -s "https://api.nuget.org/v3/index.json"
RUN dotnet build "./sourceCode.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "./sourceCode.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "SourceCode.dll"]
EXPOSE 80
The cluster is very bare-bones and does not include either curl or wget. So I need to get wget or curl installed in the pod/cluster to execute scripted commands that are set to run automatically after deployment and startup are complete. The command to do the install:
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
within the Dockerfile seems to do nothing to install it in the Kubernetes cluster, because after the build runs and deploys, if I exec into the pod and try to run
wget --help
I'm told wget doesn't exist. I do not have a lot of experience building Dockerfiles, so I am truly stumped. And I want this automated in the Dockerfile, as I will not be able to log into environments above our Test environment to perform the install manually.
It's not related to Kubernetes or pods. You can't actually install anything into a Kubernetes pod itself; you install packages into the containers that run inside the pod.
Your problem is that you install wget in your build image. When you switch to the image used further down, you lose all the installed packages, because those packages belong to the build image; build, base, and final are all different images. You need to copy files between stages explicitly, like you already did for the final image, like this:
COPY --from=publish /app .
So add the command below to your final image and you can use wget with no problem:
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
See this link for more info and best practices:
https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/
Everything between:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
ARG DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
ARG DOTNET_CLI_TELEMETRY_OPTOUT=1
WORKDIR /app
and:
FROM base AS final
is irrelevant to what ends up in the final image. With that line, you start constructing a new image from base, which was defined in the first block.
(Incidentally, on the next line you duplicate the WORKDIR statement needlessly, since it is inherited from base. Also, final just names this last stage; nothing later refers to it, e.g. with COPY --from=final, so here it is effectively only a label.)
You need to install wget in either the base image, or in the last defined image which you'll actually be running, at the end.
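A minimal sketch of that second option, keeping the earlier stages from the question unchanged and installing wget in the stage that actually ends up running:
FROM base AS final
# wget must be installed in the image the pod runs, not in the build stage
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
# WORKDIR /app is inherited from base, so it is not repeated here
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "SourceCode.dll"]
EXPOSE 80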

Why when I add nginx to docker do I get the error: /bin/sh: pip: not found

If I add
FROM nginx:1.16-alpine
to my Dockerfile, my build breaks with the error:
/bin/sh: pip: not found
I tried sending an update command via:
RUN set -xe \
&& apt-get update \
&& apt-get install python-pip
but then I get the error that apt-get can't be found.
Here is my Dockerfile:
FROM python:3.7.2-alpine
FROM nginx:1.16-alpine
ENV INSTALL_PATH /web
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:9000 --access-logfile - "web.webhook_server:create_app()"
If I remove that one line:
FROM nginx:1.16-alpine
it all runs fine. But of course, I need nginx.
What could be going wrong here? I'm very confused.
As mentioned in this issue:
Using multiple FROM is not really a feature but a bug [...]
Note that :
- There is discussion to remove support for multiple FROM : #13026
So you should decide on the one image that fits you best and then install the packages you need via RUN apk add. Note that both images you used as a base are themselves based on Alpine Linux, so you need to use apk instead of apt-get to install packages.
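A minimal sketch of that single-image approach, keeping the python:3.7.2-alpine base from the question and adding nginx with apk (whether nginx really needs to live in the same container as the Python app is a separate design decision):
FROM python:3.7.2-alpine
# Alpine images use apk, not apt-get
RUN apk add --no-cache nginx
ENV INSTALL_PATH /web
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:9000 --access-logfile - "web.webhook_server:create_app()"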
Use "FROM nginx:1.16" instead of "FROM nginx:1.16-alpine". The alpine image doesn't have apt. With "nginx:1.16" you can install your extra packages with apt.
The FROM directive tells the Docker daemon which image to extend from. You cannot extend from two different images at once.
Let me know if this helps.

How to copy a folder from docker to host while configuring custom image

I am trying to create my own Docker image. After it runs, an archive folder is created in the container as a result, and I need that archive folder copied to my host automatically.
What is important is that I have to configure this process in my Dockerfile, before creating the image.
Below you can see what is already in my Dockerfile:
FROM python:3.6.5-slim
RUN apt-get update && apt-get install -y gcc && apt-get autoclean -y
WORKDIR /my-tests
COPY jobfile.py testbed.yaml requirements.txt rabbit.py ./
RUN pip install --upgrade pip wheel setuptools && \
pip install --no-cache-dir -r requirements.txt && \
rm -v requirements.txt
VOLUME /my-tests/archive
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini","--"]
CMD ["easypy", "jobfile.py","-testbed_file","testbed.yaml"]
While running the container, map a folder on the host to the archive folder of the container, using -v /usr/host_folder:/my-tests/archive. Anything created inside the container at /my-tests/archive will then be available at /usr/host_folder on the host.
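For example (the image name my-tests is a placeholder):
# anything the job writes to /my-tests/archive lands in /usr/host_folder on the host
docker run --rm -v /usr/host_folder:/my-tests/archive my-tests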
Or use the following command to copy the files using scp. You can create a script which first runs the container, then runs the docker exec command.
docker exec -it <container-name> scp -r /my-tests/archive <host-ip>:/host_path
