Trouble installing fuse on Debian stretch Docker image

I am attempting to mount a directory in a Docker image using gcsfuse. I am using a Debian stretch image, and I am having trouble with the fuse package.
I have attempted to install fuse both via apt-get and by building from source via the git repo. Both have had their respective problems.
1: After apt-get, the output indicates that fuse was installed successfully.
root@a7d6f712fab9:/queue# apt-get install fuse
Reading package lists... Done
Building dependency tree
Reading state information... Done
fuse is already the newest version (2.9.7-1+deb9u2).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
root@a7d6f712fab9:/queue# apt-get install libfuse-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libfuse-dev is already the newest version (2.9.7-1+deb9u2).
However, when running modprobe fuse (which is what fails during the gcsfuse mount attempt):
root@a7d6f712fab9:/queue# modprobe fuse
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.9.125-linuxkit/modules.dep.bin'
modprobe: FATAL: Module fuse not found in directory /lib/modules/4.9.125-linuxkit
2: When building from the source tar.gz, meson is only available as version 0.37 on stretch, whereas libfuse requires meson >= 0.38 to build properly.
Here's my Dockerfile:
FROM python:3.6-slim
RUN apt-get update \
&& apt-get install -y libfuse-dev \
curl \
gnupg \
apt-utils \
lsb-release \
kmod
RUN export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s` \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update \
&& apt-get install -y gcsfuse
COPY . /queue
WORKDIR /queue
I'd like modprobe fuse to actually work, or to understand how I can build fuse in a way where the module can be found by modprobe.
Thanks!

Docker containers use the host kernel, so if a kernel module needs loading, you have to load it on the host machine, not inside Docker.
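For example, on a Linux host this might look like the following (a sketch; the image and bucket names are placeholders, and the flags are the usual way to expose fuse to a container):

sudo modprobe fuse    # on the host, not in the Dockerfile
docker run --device /dev/fuse --cap-add SYS_ADMIN my-gcsfuse-image \
    gcsfuse my-bucket /queue/mnt

Alternatively, --privileged also works, but it grants the container far more access than it needs.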

Related

Docker custom .deb file location

I had to compile a custom kernel to get Ubuntu to run on my laptop, and now I'm trying to run Docker containers on it.
The build generated packages, which I installed:
linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb
linux-image-5.15.30-25.25custom-dbg_5.15.30-25.25custom-1_amd64.deb
linux-image-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb
Now when I try to create docker images I get the following error:
...
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package linux-headers-5.15.30-25.25custom
E: Couldn't find any package by glob 'linux-headers-5.15.30-25.25custom'
E: Couldn't find any package by regex 'linux-headers-5.15.30-25.25custom'
The Dockerfile just pulls an nvidia image and adds some other required packages:
FROM nvidia/cuda:11.4.2-devel-ubuntu18.04
ARG COMPILE_GRAPHICS=OFF
ARG DEBIAN_FRONTEND=noninteractive
USER root
RUN \
set -ex && \
apt-key update && \
apt-get update && \
apt-get install -y -q \
build-essential \
software-properties-common \
openssl \
curl && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/apt/archives/ && \
rm -rf /usr/share/doc/ && \
rm -rf /usr/share/man/
...
It is installed on the host PC:
~$ sudo dpkg -l | grep linux-headers-5.15.30-25.2
ii linux-headers-5.15.30-25.25custom 5.15.30-25.25custom-1 amd64 Linux kernel headers for 5.15.30-25.25custom on amd64
There's no problem on other machines using the upstream Ubuntu kernel packages.
So I guess Docker needs the actual package. How can I add a custom location to fetch the packages from?
Thanks
I get the feeling you are mixing up what is outside and inside of a container.
Outside - on your host operating system you had to compile a custom kernel to get Linux running. So far so good.
Now you are trying to build a Docker container, so the next steps happen inside the container. Docker is lightweight virtualization, so the container runs on the same kernel as the host. Some package dependency is on the kernel's headers, and apt is trying to install the Debian package for them but cannot find it. That is to be expected, as you are running a custom kernel and the package is not in any well-known repository.
To get out of the situation:
check whether the headers for your kernel are available as a .deb
make that .deb available inside the container, for example by placing it into the Docker build context
ensure your Dockerfile installs the .deb before installing whatever needs it; this prevents apt from searching for it in online repositories (see the sketch below)
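A minimal Dockerfile sketch of those steps, assuming the header .deb from the question sits next to the Dockerfile in the build context:

FROM nvidia/cuda:11.4.2-devel-ubuntu18.04
# Copy the locally built kernel header package from the build context.
COPY linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb /tmp/
# Install the local .deb before anything that depends on it; apt treats
# paths (rather than bare package names) as local files and will not
# search online repositories for them.
RUN apt-get update && \
    apt-get install -y /tmp/linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb && \
    rm /tmp/linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb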

GPG error in Ubuntu 21.04 after second apt-get update during Docker build

I am getting an error while building the following Dockerfile:
FROM ubuntu:21.04
RUN apt-get update && \
apt-get install --no-install-recommends -y curl=7.* && \
apt-get install --no-install-recommends -y unzip=6.* && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
mkdir -p /usr/share/man/man1 && \
apt-get install --no-install-recommends -y maven=3.6.3-5 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
The error occurs when the second apt-get update runs.
The error is as follows:
E: The repository 'http://security.ubuntu.com/ubuntu hirsute-security InRelease' is not signed.
W: GPG error: http://archive.ubuntu.com/ubuntu hirsute InRelease: gpgv, gpgv2 or gpgv1 required for verification, but neither seems installed
E: The repository 'http://archive.ubuntu.com/ubuntu hirsute InRelease' is not signed.
W: GPG error: http://archive.ubuntu.com/ubuntu hirsute-updates InRelease: gpgv, gpgv2 or gpgv1 required for verification, but neither seems installed
E: The repository 'http://archive.ubuntu.com/ubuntu hirsute-updates InRelease' is not signed.
W: GPG error: http://archive.ubuntu.com/ubuntu hirsute-backports InRelease: gpgv, gpgv2 or gpgv1 required for verification, but neither seems installed
E: The repository 'http://archive.ubuntu.com/ubuntu hirsute-backports InRelease' is not signed.
Any kind of help would be appreciated.
That's a bug in the docker / seccomp / glibc interaction: https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1916485
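If you want to confirm that the seccomp interaction is the culprit before changing anything, one quick diagnostic (not a fix, since it disables seccomp filtering for that container) is to run the failing command with seccomp unconfined:

docker run --rm --security-opt seccomp=unconfined ubuntu:21.04 apt-get update

If that succeeds where the build fails, you are hitting the bug above.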
I've run your Dockerfile and got the same error. Playing around with various ways to disable the verification produced no good results, and neither did removing the version constraints and just installing the latest versions of the tools. The only solution I could find was to downgrade Ubuntu to 20.04, but there is no 3.6.3-5 version of maven for that version of the OS, only 3.6.3-1 (afaik).
The closest I could get working is quite different from your desired image:
FROM ubuntu:20.04
RUN apt update && \
apt install --no-install-recommends -y curl=7.* unzip=6.* maven=3.6.3-1 && \
apt clean && \
rm -rf /var/lib/apt/lists/* && \
mkdir -p /usr/share/man/man1
Also note how I use apt rather than apt-get, do only a single RUN (which makes a simpler image with only a single layer) and a single apt update, and chain the things I want to install into a single apt install rather than separate ones. This is just quicker and easier.
However, if you want a maven build box, perhaps you'd be better advised to use one of the prebuilt maven images from Docker Hub, which are themselves based on openjdk images. For Java the underlying Linux distro rarely matters, and the openjdk images are well respected:
FROM maven:3.6.3-jdk-11
RUN apt update && apt install -y curl unzip && apt clean
This bug does not occur if using a newer version of Docker (tested with 20.10). If using an older version of Docker, I recommend switching to a previous version of the ubuntu image. I tested ubuntu:20.10 with Docker 19.03 and it worked just fine. This is discussed here: https://bugs.launchpad.net/cloud-images/+bug/1928218
Update Docker to the latest version to solve this issue.
For Ubuntu users, follow these steps:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
For others please refer this link: https://docs.docker.com/engine/install/
I ran into this problem when running the Ubuntu 21.04 image under rootless Docker, but the apt-get update command worked fine under the system Docker (invoked via sudo). Since I only needed a manual test of an environment setup script, I ran under the system Docker, but depending on your application that might not be secure.
Substituting apt-get with apt has worked for me.

Prevent nvidia-docker from installing nvidia drivers with debian package

I am trying to create an nvidia-docker image with TensorRT installed for my specific application. I can't use any of the provided TensorRT base images, as they use a CUDA version that is not compatible with the application, but I have a custom TensorRT Debian package which is used in my organization. The problem is that when I install it from the Dockerfile, it also installs the nvidia drivers. As a result, the container is created successfully but can't be started; the result is:
svc_moma_usr#PL1LXD-529389:~/gutkowsp/Docker_projects/test_cuda$ nvidia-docker run tensorrt-test
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/97f449ff2535b1ad304520dae75c613931888658a66b89235b0d040a872a625c/merged/usr/bin/nvidia-smi: file exists\\\\n\\\"\"": unknown.
ERRO[0001] error waiting for container: context canceled
The Dockerfile is:
FROM nvidia/cuda:9.1-devel-ubuntu16.04
ENV DEBIAN_FRONTEND noninteractive
ENV CUDNN_VERSION 7.0.5.15
LABEL com.nvidia.cudnn.version="${CUDNN_VERSION}"
RUN apt update -y && \
apt install software-properties-common -y && \
apt-add-repository --yes --update ppa:ansible/ansible && \
apt install ansible -y
RUN apt update -y && \
apt install -y --no-install-recommends \
libcudnn7=$CUDNN_VERSION-1+cuda9.1 \
libcudnn7-dev=$CUDNN_VERSION-1+cuda9.1
RUN apt update -y && \
apt install tensorrt -y
How is this problem of unnecessary drivers solved? It seems like a common issue, as nvidia Docker images in general come with nvidia software preinstalled, which usually pulls in the drivers. Maybe someone can share the Dockerfiles for the TensorRT images for reference?
For anyone facing the same issue:
If necessary, use a cuDNN-enabled Docker image, like 11.7.1-cudnn8-runtime-ubuntu18.04, to avoid having to install cuDNN via apt.
Run apt update
Run apt install <your package> -y --dry-run | grep nvidia
Add all listed nvidia packages to the apt ignore list: append a dash to the package name, with an asterisk in place of the version number:
apt install <your package> libnvidia-compute-*-server- \
libnvidia-compute-*- --dry-run | grep nvidia
Make sure that none of the nvidia packages will be installed; if necessary, add newly discovered packages to the ignore list.
If everything is OK, remove the --dry-run flag and install your package:
apt install <your package> libnvidia-compute-*-server- libnvidia-compute-*-
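After building, a quick sanity check that no driver packages made it into the image might look like this (the image name is a placeholder):

docker run --rm my-tensorrt-image dpkg -l | grep -i nvidia

Nothing like nvidia-driver-* or libnvidia-compute-* should show up in the output.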

How to create the smallest possible Docker image after installing apt dependencies

I've created a Docker image using debian as the parent image. In my Dockerfile I've installed some dependencies using apt and pip.
Now, I want to get rid of everything that is not strictly necessary to run my app, which, of course, needs the dependencies installed.
For now I have the following lines in my Dockerfile after installing the dependencies.
RUN rm -rf /var/lib/apt/lists/* \
&& rm -Rf /usr/share/doc && rm -Rf /usr/share/man \
&& apt-get clean
I've also installed the dependencies using the --no-install-recommends option.
Anything else I can do to reduce the footprint of my Docker image?
PS: just in case, this is how I installed the dependencies:
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
sudo systemd \
build-essential libffi-dev libssl-dev \
python-pip python-dev python-setuptools python-wheel
To reduce the size of the image, you need to combine your RUN commands into one. When you create files in one layer and delete them in another, the files still exist on the drive and are shipped over the network. Their existence is just hidden when the layers of the filesystem are assembled for your container.
The Dockerfile best practices explain this in more detail: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run
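Applied to the question's Dockerfile, that means folding the install and the cleanup into one RUN, so the deleted files never land in any layer (a sketch using the packages from the question):

FROM debian
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        sudo systemd \
        build-essential libffi-dev libssl-dev \
        python-pip python-dev python-setuptools python-wheel \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man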
I'd also recommend (temporarily) building with docker build --rm=false --no-cache . and then reviewing the output of docker diff on each of the intermediate containers the build leaves behind, to see what files each step creates.
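Usage might look like this (the container ID is illustrative; docker diff inspects a container, and --rm=false keeps the build's intermediate containers around):

docker build --rm=false --no-cache .
docker ps -a               # find the intermediate containers kept by the build
docker diff 3f2c1ab9d0e7   # list the files added/changed/deleted by that step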

Docker /dev/mapper permission

I'm all new to Docker and trying to build an image of a piece of software. During the
RUN apt-get install -y xxx command I'm encountering issues:
Setting up lvm2 (2.02.95-8) ...
Setting up LVM Volume Groups... /dev/mapper/control: open failed: Operation not permitted
Failure to communicate with kernel device-mapper driver.
Check that device-mapper is available in the kernel.
No volume groups found
/dev/mapper/control: open failed: Operation not permitted
Failure to communicate with kernel device-mapper driver.
Check that device-mapper is available in the kernel.
No volume groups found
What could cause this issue?
My distro is Debian 7; maybe I should try this on a more recent distro?
here is the Dockerfile :
# Hynesim installation
FROM debian:wheezy
RUN echo $(whoami)
RUN echo "exit 0" > /usr/sbin/policy-rc.d
RUN apt-get update && apt-get install -y curl
RUN echo 'deb [arch=amd64] http://repository.hynesim.org/debian wheezy 2.2 backports' >> /etc/apt/sources.list && \
echo 'deb-src [arch=amd64] http://repository.hynesim.org/debian wheezy 2.2 backports' >> /etc/apt/sources.list
RUN curl -o - https://repository.hynesim.org/debian/hynesim.asc | apt-key add - && apt-get update && apt-get install -y \
hynesim-node \
hynesim-glacier
