How to access docker compose mounted volume from inside the container

I am mounting a local folder inside a docker container through the docker-compose.yml file:
version: '3'
services:
  myapp:
    build:
      context: ./dockerfiles
      dockerfile: myapp.Dockerfile
      args:
        - UID=1000
        - GID=1000
    network_mode: host
    volumes:
      - ./volumes/logs:/opt/myapp/logs
The mounted folder belongs to my user (uid: 1000, gid: 1000), and these are the IDs that the docker user gets, but the docker user cannot write to the mounted folder (permission denied).
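A quick way to check that the IDs really line up on both sides (standard commands, not from the original post; myapp is the service name from the compose file above):
ls -lnd ./volumes/logs        # host: numeric owner uid/gid of the folder
docker-compose exec myapp id  # container: uid/gid of the running user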
Dockerfile:
FROM centos:7
ARG UID=1000
ARG GID=1000
RUN yum -y update && \
    yum -y install epel-release && \
    yum -y install passwd curl jq supervisor iputils openssl-devel
RUN yum -y clean all
RUN useradd -m -s /bin/sh user && \
    passwd -d user && \
    usermod -o -u ${UID} user && \
    groupmod -o -g ${GID} user
VOLUME ["/opt/myapp/logs"]
ADD myapp /opt/myapp/app
ADD supervisor/services.ini /etc/supervisord.d/services.ini
ADD start.sh /
RUN chown -R user:user /opt/myapp
RUN chown user:user /start.sh
USER user
CMD ["/start.sh"]
start.sh:
#!/bin/sh
exec supervisord -c /etc/supervisord.conf -n
services.ini:
[program:myapp]
user = user
autorestart = true
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
command = /opt/myapp/app
My app runs as user and cannot write its logs to the mounted folder.
My goal is to access the logs from outside the docker container.
Even if I run with the root user inside the container, I still cannot access the mounted folder!
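One cause that matches even root being denied is SELinux on a CentOS/RHEL host: the bind-mounted directory carries a label the container may not write to. A hedged sketch of the usual remedy (an assumption about the cause, not something stated above), using Docker's standard :Z relabel option on the volume entry:
volumes:
  - ./volumes/logs:/opt/myapp/logs:Z   # :Z relabels the host dir with a private container context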

Related

Permission Denied on "docker compose exec --user alice app /bin/bash"

In my Dockerfile, I create a user, "alice" (a generic name that isn't industry-specific), with a home directory. Alice has /home/alice/.bashrc and root has /root/.bashrc (which all users can read: chmod a+r /root/.bashrc).
I can run docker compose exec app /bin/bash and access the app container's command line as the root user. I can then su alice and have full access to the container as Alice.
However, if I run docker compose exec --user alice app /bin/bash, I get "bash: /root/.bashrc: Permission denied" followed by the "alice@sha:working/directory" CLI prompt. ls ~/ gives the error: "ls: cannot open directory '/root/': Permission denied".
My docker-compose.yml file (abridged):
services:
  app:
    build:
      context: ./docker/app
      dockerfile: Dockerfile
      args:
        - HOST_GID=${HOST_GID}
        - HOST_UID=${HOST_UID}
    volumes:
      - ${full_source_path}:/var/www/html
    ...
    env_file: .env
My Dockerfile (abridged):
FROM --platform=$BUILDPLATFORM php:7.1-apache
# Set up Apache
RUN a2enmod rewrite
# UID & GID are passed in to use the same UID/GID as the host user's user account
ARG HOST_UID
ARG HOST_GID
RUN echo "Creating alice" && \
    groupadd \
        --force \
        --gid ${HOST_GID} \
        alice && \
    sync && \
    useradd \
        --no-log-init \
        --uid ${HOST_UID} \
        --gid ${HOST_GID} \
        --create-home \
        --shell /bin/bash \
        alice \
    && \
    sync && \
    echo "DONE"
# copy externally created files, including .bashrc, into /home/alice/
...
RUN chmod a+r /root/.bashrc
RUN chmod a+r /home/alice/.bashrc
RUN echo "Finalizing" && \
    chown -R alice:alice /home/alice/ && \
    echo "DONE"
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Persist cached writes
RUN sync
WORKDIR /var/www/html
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh file:
#!/usr/bin/env bash
set -e
echo "Starting Apache"
exec apache2-foreground
echo "Container Ready"
sleep infinity
My host:
MacOS 12.4
Docker Desktop 4.10.1
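The /root/.bashrc error indicates that HOME is still /root when exec switches users. A minimal workaround sketch, assuming the service and user names above (docker compose exec accepts --env, so HOME can be overridden for the session):
docker compose exec --user alice --env HOME=/home/alice app /bin/bash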

Docker Alpine, Celery (worker and beat) fail with PermissionError when using non-root user

I'm trying to run a Flask app with Celery (worker + beat) on Docker Alpine using docker-compose.
I want it to run with a non-root user celery in my Docker container.
The Flask app builds ok and works, but my celery containers are failing with this error:
File "/usr/lib/python3.6/site-packages/celery/platforms.py", line 543, in maybe_drop_privileges
_setuid(uid, gid)
File "/usr/lib/python3.6/site-packages/celery/platforms.py", line 564, in _setuid
initgroups(uid, gid)
File "/usr/lib/python3.6/site-packages/celery/platforms.py", line 507, in initgroups
return os.initgroups(username, gid)
PermissionError: [Errno 1] Operation not permitted
My Dockerfile:
I tried adding RUN chown celery:celery /etc/group, thinking that was the issue, but it's still failing.
FROM alpine:3.8
RUN apk update && \
    apk add build-base python3 python3-dev libffi-dev libressl-dev && \
    cd /usr/bin && \
    ln -sf python3 python && \
    ln -sf pip3 pip && \
    pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN addgroup celery
RUN adduser celery -G celery -s /bin/sh -D
RUN mkdir -p /var/log/celery/ && chown celery:celery /var/log/celery/
RUN mkdir -p /var/run/celery/ && chown celery:celery /var/run/celery/
RUN chown celery:celery /etc/group # added to try fixing the issue
USER celery
ENV FLASK_APP=flask_app
WORKDIR app/
COPY flask_app flask_app
My docker-compose:
(...)
celeryworker:
  build: .
  command: celery -A flask_app.tasks worker --loglevel=INFO --uid=celery --pidfile=/tmp/celeryworker-shhh.pid
celerybeat:
  build: .
  command: celery -A flask_app.tasks beat --loglevel=INFO --uid=celery --pidfile=/tmp/celerybeat-shhh.pid
You should do it like this:
RUN mkdir -p /var/log/celery/ /var/run/celery/
RUN useradd -G root celery && \
    chgrp -Rf root /var/log/celery/ /var/run/celery/ && \
    chmod -Rf g+w /var/log/celery/ /var/run/celery/ && \
    chmod g+w /etc/passwd
...
RUN chmod a+x /start.sh
USER celery
ENTRYPOINT ["/start.sh"]
You should create the user celery first, then add it to the root group. After that, you need to set write permission on the folders where you put logs, and on /etc/passwd.
You also need a script that adds your user to /etc/passwd at startup:
#!/bin/bash
#
if [ `id -u` -ge 10000 ]; then
    echo "celery:x:`id -u`:`id -g`:,,,:/home/web:/bin/bash" >> /etc/passwd
fi
Both answers from @Shashank V and @Kine were really relevant and helpful, but I still had some issues afterward.
After doing some research, I finally made it work with the following configuration.
Dockerfile
FROM alpine:3.11.0
RUN apk update && \
    apk add build-base python3 python3-dev libffi-dev libressl-dev && \
    ln -sf /usr/bin/python3 /usr/bin/python && \
    ln -sf /usr/bin/pip3 /usr/bin/pip && \
    pip install --upgrade pip
RUN mkdir -p /var/log/celery/ /var/run/celery/
RUN addgroup app && \
    adduser --disabled-password --gecos "" --ingroup app --no-create-home app && \
    chown app:app /var/run/celery/ && \
    chown app:app /var/log/celery/
USER app
ENV PATH="/home/app/.local/bin:${PATH}"
WORKDIR app/
COPY requirements.txt .
RUN pip install --user -r requirements.txt
COPY flask_app flask_app
ENV FLASK_APP=flask_app
docker-compose
(...)
celeryworker:
  build: .
  command: >
    celery -A shhh.tasks worker
    --loglevel=INFO
    --logfile=/var/log/celery/celeryworker-shhh.log
    --pidfile=/var/run/celery/celeryworker-shhh.pid
celerybeat:
  build: .
  command: >
    celery -A shhh.tasks beat
    --loglevel=INFO
    --logfile=/var/log/celery/celerybeat-shhh.log
    --pidfile=/var/run/celery/celerybeat-shhh.pid
    --schedule=/var/run/celery/celerybeat-schedule # specify schedule db in a loc where app has read/write access
You have to be the root user if you want to use the --uid or --gid arguments, since only root may switch to another uid. Try removing these arguments.
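For example, a sketch of the worker service with the flag dropped (based on the compose file from the question; the Dockerfile's USER celery already runs the process as the right user):
celeryworker:
  build: .
  command: celery -A flask_app.tasks worker --loglevel=INFO --pidfile=/tmp/celeryworker-shhh.pid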

Cannot copy files from docker to tmp of host using docker-compose

docker -v:
Docker version 1.12.1, build 23cf638
docker-compose.yml:
version: "2"
services:
test-docker:
build: ./test-docker
volumes:
- /tmp:/tmp
command: /bin/bash -c "mkdir -p /my && mkdir -p /tmp/my \
&& echo 'tmp:' && ls /tmp && echo 'code:' && ls /my \
&& cp -r /my/nLWjfTg9 /tmp/my/nLWjfTg9 \
&& cp -r /my/WzzrKGqe /tmp/my/WzzrKGqe"
Dockerfile:
FROM ubuntu:16.04
ENV TERM xterm
ENV DEBIAN_FRONTEND noninteractive
ADD http://pastebin.com/raw/nLWjfTg9 /my
ADD http://pastebin.com/WzzrKGqe /my
docker-compose up:
test-docker_1 | mkdir: cannot create directory '/my': File exists
/tmp/my on the host is not created.
The ADD instructions in your Dockerfile download the pastebin content to a file named /my in the image: because the destination does not end with a trailing slash, it is treated as a regular file name (the second ADD overwrites the first).
In your command you run mkdir -p /my - since a regular file already exists at that path, mkdir fails even with -p, and the command ends before copying any files.
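A minimal fix sketch (a suggestion, not from the original answer): give each ADD destination a trailing slash so the downloads land inside a /my directory, which also makes mkdir -p /my redundant:
ADD http://pastebin.com/raw/nLWjfTg9 /my/
ADD http://pastebin.com/WzzrKGqe /my/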

Run docker with jenkins user inside jenkins container on Centos7

I'm trying to run Docker inside my Jenkins slave container on CentOS 7.1.
These are the steps I performed in my Dockerfile:
FROM java:8
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group} \
    && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN groupadd -g 983 docker \
    && gpasswd -a ${user} docker
So I have a user jenkins (uid 1000) in a group jenkins (gid 1000) and in a group docker (gid 983). Why did I choose gid 983?
Well if I check /etc/group on my host I see:
docker:x:983:centos
In my docker-compose script I'm mounting my docker socket, so that's why I used the same gid as on my host.
Part of docker-compose:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /usr/bin/docker:/usr/bin/docker
When I exec inside my container as root:
root@c4af16c386d7:/var/jenkins_home# docker images
REPOSITORY      TAG     IMAGE ID        CREATED         SIZE
jenkins-slave   1.0     94a5d6606f86    10 minutes
jenkins         2.7.1   b4974ba62598    3 weeks ago     741 MB
java            8-jdk   264282a59a95    7 weeks ago     669.2 MB
But as jenkins user:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
In my container:
cat /etc/passwd
jenkins:x:1000:1000::/var/jenkins_home:/bin/bash
cat /etc/group
jenkins:x:1000:
docker:x:983:jenkins
Addition:
$ docker exec -it ec52d4125a02 bash
root@ec52d4125a02:/var/jenkins_home# whoami
root
root@ec52d4125a02:/var/jenkins_home# su jenkins
jenkins@ec52d4125a02:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a23521523249 jenkins:2.7.1 "/bin/tini -- /usr/lo" 20 minutes ago Up 20 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:32777->22/tcp, 0.0.0.0:32776->50000/tcp jenkins-master
ec52d4125a02 jenkins-slave:1.0 "setup-sshd" 20 minutes ago Up 20 minutes 0.0.0.0:32775->22/tcp, 0.0.0.0:32774->8080/tcp, 0.0.0.0:32773->50000/tcp jenkins-slave
but:
$ docker exec -it -u jenkins ec52d4125a02 bash
jenkins@ec52d4125a02:~$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
In the first case my jenkins user:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),983(docker)
In the second case:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
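This is consistent with docker exec -u applying only the uid/gid from /etc/passwd and skipping supplementary groups, while su re-evaluates group membership. A hedged workaround sketch, reusing the gid 983 socket group from above (docker run does support --group-add):
docker run --group-add 983 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  jenkins-slave:1.0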
First, why do you need to spin containers from inside another with Jenkins? Here's why this is not a good idea.
Having said that, and since you still want to go ahead: there are several steps you need to take to run Docker inside a Docker container. For example, have you started this container in --privileged mode?
You should try using Jerome Petazzoni's Docker in Docker as it does everything you need.
You can then combine DInD's stuff with a Jenkins installation. Here's an example that I've put together by mashing up Jerome's DInD with other things, assembling a docker container that has Jenkins, Docker Compose and other useful stuff:
Dockerfile:
FROM ubuntu:xenial
ENV UBUNTU_FLAVOR xenial
#== Ubuntu flavors - common
RUN echo "deb http://archive.ubuntu.com/ubuntu ${UBUNTU_FLAVOR} main universe\n" > /etc/apt/sources.list \
&& echo "deb http://archive.ubuntu.com/ubuntu ${UBUNTU_FLAVOR}-updates main universe\n" >> /etc/apt/sources.list
MAINTAINER Rogério Peixoto
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
  && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# useful stuff.
RUN apt-get update -q && apt-get install -qy \
    apt-transport-https \
    ca-certificates \
    curl \
    lxc \
    supervisor \
    zip \
    git \
    iptables \
    locales \
    nano \
    make \
    openssh-client \
    openjdk-8-jdk-headless \
  && rm -rf /var/lib/apt/lists/*
# Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
# Install the wrapper script from https://raw.githubusercontent.com/docker/docker/master/hack/dind.
ADD ./wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
# Define additional metadata for our image.
VOLUME /var/lib/docker
ENV JENKINS_VERSION 2.8
ENV JENKINS_SHA 4d83a40319ecf4eaab2344a18c197bd693080530
RUN mkdir -p /usr/share/jenkins/ \
  && curl -SL http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war -o /usr/share/jenkins/jenkins.war
# RUN echo "$JENKINS_SHA /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN mkdir -p /usr/share/jenkins/ref \
  && chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN usermod -a -G docker jenkins
ENV DOCKER_COMPOSE_VERSION 1.8.0-rc1
# Install Docker Compose
RUN curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN apt-get install -y python-pip && pip install supervisor-stdout
EXPOSE 8080
EXPOSE 50000
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:docker]
priority=10
command=wrapdocker
startsecs=0
exitcodes=0,1
[program:chown]
priority=20
command=chown -R jenkins:jenkins /var/jenkins_home
startsecs=0
[program:jenkins]
priority=30
user=jenkins
environment=JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="jenkins"
command=java -jar /usr/share/jenkins/jenkins.war
stdout_events_enabled = true
stderr_events_enabled = true
[eventlistener:stdout]
command=supervisor_stdout
buffer_size=100
events=PROCESS_LOG
result_handler=supervisor_stdout:event_handler
You can get the wrapdocker file here.
Put all that in the same directory and build it:
docker build -t my_dind_jenkins .
Then run it:
docker run -d --privileged \
  --name=master-jenkins \
  -p 8080:8080 \
  -p 50000:50000 my_dind_jenkins

Is s3fs not able to mount inside docker container?

I want to mount s3fs inside a docker container.
I made a docker image with s3fs, and did this:
host$ docker run -it --rm docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
fuse: failed to open /dev/fuse: Operation not permitted
Showing "Operation not permitted" error.
So I googled and tried this (adding --privileged=true) again:
host$ docker run -it --rm --privileged=true docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
[ root@container:~ ]$ fusermount -u /mnt/s3bucket
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
Mounting now shows no error, but when I run ls, a "Transport endpoint is not connected" error occurs.
How can I mount s3fs inside a docker container?
Is it impossible?
[UPDATED]
Added the Dockerfile configuration.
Dockerfile:
FROM dockerfile/ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y libfuse-dev
RUN apt-get install -y fuse
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libxml2-dev
RUN apt-get install -y mime-support
RUN \
    cd /usr/src && \
    wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz && \
    tar xvzf s3fs-1.74.tar.gz && \
    cd s3fs-1.74/ && \
    ./configure --prefix=/usr && \
    make && make install
ADD passwd/passwd-s3fs /etc/passwd-s3fs
ADD rules.d/99-fuse.rules /etc/udev/rules.d/99-fuse.rules
RUN chmod 640 /etc/passwd-s3fs
RUN mkdir /mnt/s3bucket
rules.d/99-fuse.rules:
KERNEL=="fuse", MODE="0777"
I'm not sure what you did that did not work, but I was able to get this to work like this:
Dockerfile:
FROM ubuntu:12.04
RUN apt-get update -qq
RUN apt-get install -y build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support automake libtool wget tar
RUN wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.77.tar.gz -O /usr/src/v1.77.tar.gz
RUN tar xvz -C /usr/src -f /usr/src/v1.77.tar.gz
RUN cd /usr/src/s3fs-fuse-1.77 && ./autogen.sh && ./configure --prefix=/usr && make && make install
RUN mkdir /s3bucket
After building with:
docker build --rm -t ubuntu/s3fs:latest .
I ran the container with:
docker run -it -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged ubuntu/s3fs:latest bash
and then inside the container:
root@efa2689dca96:/# s3fs s3bucket /s3bucket
root@efa2689dca96:/# ls /s3bucket
testing.this.out work.please working
root@efa2689dca96:/#
which successfully listed the files in my s3bucket.
You do need to make sure the kernel on your host machine supports fuse, but it seems you have already done so.
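A quick host-side check, assuming a standard Linux host (ordinary commands, not from the answer):
lsmod | grep fuse   # is the fuse kernel module loaded?
ls -l /dev/fuse     # does the fuse device node exist?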
Note: Your S3 mountpoint will not show/work from inside other containers when using Docker's --volume or --volumes-from directives. For example:
docker run -t --detach --name testmount -v /s3bucket -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
docker run -it --volumes-from testmount --entrypoint /bin/ls ubuntu:12.04 -ahl /s3bucket
total 8.0K
drwxr-xr-x 2 root root 4.0K Aug 21 21:32 .
drwxr-xr-x 51 root root 4.0K Aug 21 21:33 ..
returns no files even though there are files in the bucket.
Adding another solution.
Dockerfile:
FROM ubuntu:16.04
# Update and install packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update --fix-missing && \
    apt-get install -y automake autotools-dev g++ git libcurl4-gnutls-dev wget libfuse-dev libssl-dev libxml2-dev make pkg-config
# Clone and build s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse.git /tmp/s3fs-fuse && \
    cd /tmp/s3fs-fuse && ./autogen.sh && ./configure && make && make install && ldconfig && /usr/local/bin/s3fs --version
# Remove build packages
RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y wget automake autotools-dev g++ git make && \
    apt-get -y autoremove --purge && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set user and group
ENV USER='appuser'
ENV GROUP='appuser'
ENV UID='1000'
ENV GID='1000'
RUN groupadd -g $GID $GROUP && \
    useradd -u $UID -g $GROUP -s /bin/sh -m $USER
# Install fuse
RUN apt-get update && \
    apt-get install -y fuse && \
    chown ${USER}.${GROUP} /usr/local/bin/s3fs
# Config fuse
RUN chmod a+r /etc/fuse.conf && \
    perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Copy credentials
ENV SECRET_FILE_PATH=/home/${USER}/passwd-s3fs
COPY ./passwd-s3fs $SECRET_FILE_PATH
RUN chmod 600 $SECRET_FILE_PATH && \
    chown ${USER}.${GROUP} $SECRET_FILE_PATH
# Switch to user
USER ${UID}:${GID}
# Create mnt point
ENV MNT_POINT_PATH=/home/${USER}/data
RUN mkdir -p $MNT_POINT_PATH && \
    chmod g+w $MNT_POINT_PATH
# Execute: mount the bucket, then keep the container alive
ENV S3_BUCKET=''
WORKDIR /home/${USER}
CMD /usr/local/bin/s3fs $S3_BUCKET $MNT_POINT_PATH -o passwd_file=passwd-s3fs -o allow_other && exec sleep 100000
docker-compose.yml:
version: '3.8'
services:
  s3fs:
    privileged: true
    image: <image-name:tag>
    ## Debug
    #stdin_open: true # docker run -i
    #tty: true        # docker run -t
    environment:
      - S3_BUCKET=my-bucket-name
    devices:
      - "/dev/fuse"
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    cap_drop:
      - NET_ADMIN
Build image with docker build -t <image-name:tag> .
Run with: docker-compose up -d
If you would prefer to use docker-compose for testing on your localhost, use the following. Note that you don't need the --privileged flag, as we are passing the cap_add: SYS_ADMIN and devices: /dev/fuse options in the docker-compose.yml.
create file .env
AWS_ACCESS_KEY_ID=xxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxx
AWS_BUCKET_NAME=xxxxxx
create file docker-compose.yml
version: "3"
services:
s3-fuse:
image: debian-aws-s3-mount
restart: always
build:
context: .
dockerfile: Dockerfile
environment:
- AWSACCESSKEYID=${AWS_ACCESS_KEY_ID}
- AWSSECRETACCESSKEY=${AWS_SECRET_ACCESS_KEY}
- AWS_BUCKET_NAME=${AWS_BUCKET_NAME}
cap_add:
- SYS_ADMIN
devices:
- /dev/fuse
create file Dockerfile. You can use any docker image you prefer, but first check if your distro is supported here
FROM node:16-bullseye
RUN apt-get update -qq
RUN apt-get install -y s3fs
RUN mkdir /s3_mnt
To run container execute:
$ docker-compose run --rm -t s3-fuse /bin/bash
Once inside the container, you can mount your s3 bucket by running the command:
# s3fs ${AWS_BUCKET_NAME} /s3_mnt
Note: For this setup to work .env, Dockerfile and docker-compose.yml must be created in the same directory. Don't forget to update your .env file with the correct credentials to the s3 bucket.
