Permission denied with gcsfuse in unprivileged Ubuntu-based Docker container

I was not able to run gcsfuse in my Ubuntu-based Docker image with --cap-add SYS_ADMIN --device /dev/fuse, as suggested in other posts.
It works like a charm with --privileged though, whether as root or as a non-root user. But I would like to avoid that option.
My Dockerfile:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y gnupg lsb-release wget
RUN lsb_release -c -s > /tmp/lsb_release
RUN GCSFUSE_REPO=$(cat /tmp/lsb_release); echo "deb http://packages.cloud.google.com/apt gcsfuse-$GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN wget -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update
RUN apt-get install -y gcsfuse
My test:
docker run -it --rm \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  -v /path/in/host/to/key.json:/path/to/key.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json my_image:0.1 /bin/bash
In the running container:
mkdir /root/gruik
gcsfuse bucket_name /root/gruik/
The result:
Using mount point: /root/gruik
Opening GCS connection...
Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: permission denied
Am I missing something? Thanks

This is actually an issue in Docker itself, and you need to run your container in --privileged mode to achieve this functionality. Check this related Docker issue.
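For reference, here is the test from the question reworked to use --privileged (same image, key path, and bucket as above; with --privileged, the --device and --cap-add flags become redundant):

docker run -it --rm \
  --privileged \
  -v /path/in/host/to/key.json:/path/to/key.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json my_image:0.1 /bin/bash

# then, inside the container:
mkdir /root/gruik
gcsfuse bucket_name /root/gruik/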

Related

mkdir: cannot create directory ‘cpuset’: Read-only file system when running a "service docker start" in Dockerfile

I have a Dockerfile that extends the Apache Airflow 2.5.1 base image. What I want to do is be able to use docker inside my airflow containers (i.e. docker-in-docker) for testing and evaluation purposes.
My docker-compose.yaml has the following mount:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
My Dockerfile looks as follows:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker airflow
RUN service docker start
USER airflow
Basically:
Install docker.
Add the airflow user to the docker group.
Start the docker service.
Continue as airflow.
Unfortunately, this does not work. During RUN service docker start, I encounter the following error:
Step 11/12 : RUN service docker start
---> Running in 77e9b044bcea
mkdir: cannot create directory ‘cpuset’: Read-only file system
I have another Dockerfile for building a local jenkins image, which looks as follows:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker jenkins
RUN service docker start
USER jenkins
I.e. it is exactly the same, except that I am using the jenkins user. Building this image works.
I have not set any extraneous permissions on my /var/run/docker.sock:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 18 17:14 /var/run/docker.sock
My questions are:
Why does RUN service docker start not work when building my airflow image?
Why does the exact same command in my jenkins Dockerfile work?
I've tried most of the answers to similar questions, e.g. here and here, but they have unfortunately not helped.
I'd rather avoid the chmod 777 /var/run/docker.sock solution if at all possible, and it should be possible, since my jenkins image builds correctly...
Just delete the RUN service docker start line.
The docker CLI tool needs to connect to a Docker daemon, which it normally does through the /var/run/docker.sock Unix socket file. Bind-mounting the socket into the container is enough to make the host's Docker daemon accessible; you do not need to separately start Docker in the container.
There are several issues with the RUN service ... line specifically. Docker has a kind of complex setup internally, and some of the things it does aren't normally allowed in a container; that's probably related to the "cannot create directory" error. In any case, a Docker image doesn't persist running processes, so if you were able to start Docker inside the build, it wouldn't still be running when the container eventually ran.
More conceptually, a container doesn't "run services"; it is a wrapper around a single process (and its children). Commands like service or systemctl often won't work the way you expect, and I'd generally avoid them in a Docker context.
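As a quick sanity check after deleting that line (a sketch; <airflow-service> is a placeholder for whatever your docker-compose.yaml names the Airflow container):

# rebuild without the RUN service docker start line, then:
docker compose up -d --build
docker compose exec <airflow-service> docker version
# both Client and Server sections should print; the Server is the host's daemon,
# reached through the bind-mounted /var/run/docker.sock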

Dockerfile from Jenkins JDK 11 with Docker engine installed - Cannot connect to the Docker daemon

I've created a Dockerfile that is based off jenkins/jenkins:lts-jdk11.
I'm trying to install docker + docker compose so that jenkins will have access to them when I create my pipeline for CI/CD.
Here is my Dockerfile:
FROM jenkins/jenkins:lts-jdk11 AS jenkins
WORKDIR /home/jenkins
RUN chown -R 1000:1000 /var/jenkins_home
USER root
# Install aws cli version 2
RUN apt-get update && apt-get install -y unzip curl vim bash sudo
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
#Install docker cli command
RUN sudo apt-get update
RUN sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN sudo apt-get update
RUN sudo apt-get install -y docker-ce docker-ce-cli containerd.io
##Install docker compose
RUN mkdir -p /usr/local/lib/docker/cli-plugins
RUN curl -SL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o /usr/local/lib/docker/cli-plugins/docker-compose
RUN chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
RUN sudo usermod -a -G docker jenkins
The docker commands work well within the container, but as soon as I start to build an image it displays this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If I try to start the docker service with service docker start, I get the following error:
mkdir: cannot create directory ‘cpuset’: Read-only file system
I'm not sure how to solve this one.
Thanks in advance.
The container does not use an init system, which is why the Docker service cannot be started.
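The usual alternative, sketched here, is to not start a daemon in the container at all and let the docker CLI talk to the host's daemon through the bind-mounted socket (the image tag is a placeholder for your local jenkins build):

# no service docker start needed; the CLI defaults to /var/run/docker.sock
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins:lts-jdk11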

Got permission denied while trying to connect to the Docker daemon socket: without chmod

This question is related to this one, but I am trying to avoid solutions which make use of chmod. I can't change the permissions of /var/run/docker.sock inside the Dockerfile because it is a volume, and I am looking to avoid manually interfering with the environment. I am running on macOS.
I have a Dockerfile which installs the docker engine into a debian based container, and adds a user xyz to the group docker.
FROM debian
USER root
# https://docs.docker.com/engine/install/debian/
RUN apt-get update
RUN apt-get --yes install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | \
gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get --yes install docker-ce docker-ce-cli containerd.io
RUN useradd xyz
RUN usermod -a -G docker xyz
RUN newgrp docker
USER xyz
This is my docker-compose.yml:
services:
  my_service:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: tail -f /dev/null
The user xyz gets created and added to the docker group, which according to Docker's instructions here should be enough to allow the user xyz access to the docker socket, but I still see permission issues.
> docker compose exec my_service whoami
xyz
> docker compose exec my_service groups
xyz docker
> docker compose exec my_service docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
Hopefully this is reproducible for others - it would be good to know whether others experience the same issue.
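One diagnostic that narrows this down (a sketch, using the service name from the compose file above): on Linux the socket grants group access by numeric GID, so the docker group inside the container only helps if its GID matches the GID that owns the mounted socket; on macOS, Docker Desktop proxies the socket from a VM, so the host-side ownership shown earlier may not carry over at all.

docker compose exec my_service ls -ln /var/run/docker.sock   # numeric owner/group of the socket
docker compose exec my_service getent group docker           # GID of the in-container docker group
# if the two GIDs differ, membership in a group merely named docker is not enough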

Dockerfile run locally has google cloud SDK, same Dockerfile in production does not

Our Dockerfile has the following lines:
# Installing google cloud SDK for gsutil
RUN apt-get update && \
    echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
    apt-get update -y && \
    apt-get install google-cloud-sdk -y
When we launch a docker container locally from this image, and docker exec -it containerID bash into the container, we get:
airflow@containerID:~$ gsutil --version
gsutil version: 4.65
When we launch a docker container on our GCP compute engine from this image, and docker exec -it containerID bash into the container, we get:
airflow@containerID:~$ gsutil --version
bash: gsutil: command not found
I thought the whole point of Docker and Dockerfiles was to avoid this exact issue of something working locally but not in production... We're at a loss for how to even debug this.
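A first debugging step (a sketch; containerID is the placeholder used above, and <image> stands in for the image tag): verify that both environments are really running the same image build, since an identical Dockerfile built at different times, or a stale cached tag on the compute engine, can yield different images:

docker inspect --format '{{.Image}}' containerID           # image ID the container was created from
docker image inspect --format '{{.RepoDigests}}' <image>   # compare this digest on both machines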

How to run docker command in this Airflow docker container?

I have set up Airflow with docker-compose as described here.
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
And there is an Airflow task that has to execute a docker command, like:
BashOperator(
    task_id='my',
    bash_command="""
    docker run ..............
    """,
    dag=dag,
)
This means the Docker package is required in the Airflow Docker image, but it is not there.
So I tried to build my own Airflow image with Docker installed, like:
FROM apache/airflow:2.0.1 as airflow-image
SHELL ["/bin/bash", "-o", "pipefail", "-e", "-u", "-x", "-c"]
USER root
RUN apt-get update && apt-get --assume-yes install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo $(lsb_release -cs)
RUN echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get --assume-yes install docker-ce docker-ce-cli containerd.io
With an image built from the above Dockerfile, docker is installed:
$ whereis docker
docker: /usr/bin/docker /usr/libexec/docker
$ whereis dockerd
dockerd: /usr/bin/dockerd
But I cannot get the Docker daemon started so that the Airflow task can execute the docker run command.
After seeing this ENTRYPOINT and testing several things,
https://github.com/apache/airflow/blob/master/Dockerfile#L535
ENTRYPOINT ["/usr/bin/dumb-init", "--", "/entrypoint"]
I wonder whether I can start the Docker daemon or not.
Any advice?
Can I make a Docker image which is based on airflow:2.0.1 and supports docker commands?
If I understand you correctly, you are trying to run a custom Docker image as a task in Airflow.
There is a DockerOperator available, see an example here.
I think what you tried to do is host Docker inside of a Docker container. This is not necessary, since the Airflow Docker container should have access to the Docker daemon running on your machine.
So if you run Airflow 2.0, make sure the Python package apache-airflow-providers-docker is installed in your Airflow Docker container (apache-airflow-backport-providers-docker is the equivalent backport for Airflow 1.10.x only). And include this in your Python DAG file: from airflow.providers.docker.operators.docker import DockerOperator.
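If you are extending the image anyway, the provider can be baked in instead; a minimal sketch (the official Airflow image expects pip installs to run as the airflow user, not root):

# in the Dockerfile, after switching back to USER airflow
pip install apache-airflow-providers-docker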
Then use the operator, something like this:
dockertask = DockerOperator(
    task_id='docker_command',
    image='centos:latest',
    api_version='auto',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge"
)
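One caveat, depending on your compose file: the docker_url above points at /var/run/docker.sock inside the container, so the host socket must be bind-mounted into the Airflow services (a /var/run/docker.sock:/var/run/docker.sock entry under each service's volumes:). A quick check, with airflow-worker as a guessed service name from the official compose setup:

docker-compose exec airflow-worker ls -l /var/run/docker.sock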
