Docker permission denied with volume

I'm trying to start an Nginx container that serves static content located on the host, in /opt/content.
The container is started with:
docker run -p 8080:80 -v /opt/content:/usr/share/nginx/html nginx:alpine
Nginx keeps giving me 403 Forbidden. Moreover, when trying to inspect the content of the directory, I get strange results:
$ docker exec -i -t inspiring_wing /bin/sh
/ # ls -l /usr/share/nginx/
total 4
drwxrwxrwx 3 root root 4096 Aug 15 08:08 html
/ # ls -l /usr/share/nginx/html/
ls: can't open '/usr/share/nginx/html/': Permission denied
total 0
I ran chmod -R 777 /opt/ to be sure there are no restrictions on the host, but it doesn't change anything. I also tried adding the :ro flag to the volume option, with no luck.
How can I make the mounted volume readable by the container?
UPDATE: here are the full steps I performed to reproduce this problem (as root, and with another directory, to start from a clean config):
mkdir /public
echo "Hello World" > /public/index.html
chmod -R 777 /public
docker run -p 8080:80 -d -v /public:/usr/share/nginx/html nginx:alpine
docker exec -i -t inspiring_wing /bin/sh
ls -l /usr/share/nginx/html
And this last command inside the container again returns Permission denied. Of course, replace inspiring_wing with the name of the created container.

The problem was caused by SELinux, which prevented Docker from accessing the file system.
If someone has the same problem as in this post, here is how to check whether it's the same situation:
1/ Check the SELinux status with sestatus. If the mode is enforcing, it may block Docker from accessing the filesystem.
# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
2/ Change the mode to permissive: setenforce 0. There should be no more restrictions on Docker.
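Note that setenforce 0 only lasts until the next reboot. If you decide to keep SELinux permissive (a generic SELinux sketch, not specific to Docker), the persistent setting lives in /etc/selinux/config:
# temporary, until the next reboot
setenforce 0
# persistent across reboots: switch SELINUX=enforcing to SELINUX=permissive
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config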

You're mounting /opt/content from the host onto /usr/share/nginx/html in the container. So when you log in, you want to look in /usr/share/nginx/html for the files.
If this doesn't help enough, you can paste the output of ls -lah /usr/share/nginx/html, but I think you just don't have an index page in there.

Instead of setting SELinux to permissive on your host entirely, I would recommend setting the correct security context for your volume source:
chcon -R -t container_file_t PATHTOHOSTDIR
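Docker can also apply the SELinux label for you at mount time with the z (shared among containers) or Z (private to this container) volume options, which avoids running chcon by hand. A rough sketch for the nginx example from the question:
# relabel /opt/content so the container is allowed to read it
docker run -p 8080:80 -v /opt/content:/usr/share/nginx/html:ro,Z nginx:alpine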

Related

docker-compose: Scaling containers with distinct host volume map

Here, I deployed 2 containers with the --scale flag:
docker-compose up -d --scale gitlab-runner=2
Two containers are deployed, named scalecontainer_gitlab-runner_1 and scalecontainer_gitlab-runner_2 respectively.
I want to map a different volume for each container:
/srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
Getting this error:
WARNING: The DOCKER_SCALE_NUM variable is not set. Defaulting to a blank string.
Is there any way I can map a different volume for each container?
services:
  gitlab-runner:
    image: "gitlab/gitlab-runner:latest"
    restart: unless-stopped
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - /srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
version: "3.5"
I don't think you can; there's an open request on this here. Below I'll describe an alternative method for getting what you want.
Try creating a symbolic link from within the container that points to the directory you want. You can determine the "number" of the container after it's created by reading the container name from the Docker API and taking the final segment. To do this you have to mount the Docker socket into the container, which has big security implications.
Setup
Here is a simple script to get the number of the container (Credit Tony Guo).
get-name.sh
# Ask the Docker API (via the mounted socket) for this container's details
DOCKERINFO=$(curl -s --unix-socket /run/docker.sock http://docker/containers/$HOSTNAME/json)
# Take the trailing segment of the container name, e.g. ..._app_2 -> 2
ID=$(python3 -c "import sys, json; print(json.loads(sys.argv[1])[\"Name\"].split(\"_\")[-1])" "$DOCKERINFO")
echo "$ID"
Then we have a simple entrypoint file which gets the container number, creates the specific config directory if it doesn't exist, and links its specific config directory to a known location (/etc/config in this example).
entrypoint.sh
#!/bin/sh
# Get the number of this container
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
# Create a config dir for this container if none exists
mkdir -p "$CONFIG_DIR"
# Create a sym link from a well known location to our individual config dir
ln -s "$CONFIG_DIR" /etc/config
exec "$#"
Next we have a Dockerfile to build our image. We need to set the entrypoint and install curl and python for it to work, and also copy in our get-name.sh script.
Dockerfile
FROM alpine
COPY entrypoint.sh entrypoint.sh
COPY get-name.sh /usr/bin/get-name
RUN apk update && \
    apk add \
        curl \
        python3 \
    && \
    chmod +x entrypoint.sh /usr/bin/get-name
ENTRYPOINT ["/entrypoint.sh"]
Last, a simple compose file that specifies our service. Note that the docker socket is mounted, as well as ./config which is where our different config directories go.
docker-compose.yml
version: '3'
services:
  app:
    build: .
    command: tail -f
    volumes:
      - /run/docker.sock:/run/docker.sock:ro
      - ./config:/config
Example
# Start the stack
$ docker-compose up -d --scale app=3
Starting volume-per-scaled-container_app_1 ... done
Starting volume-per-scaled-container_app_2 ... done
Creating volume-per-scaled-container_app_3 ... done
# Check config directory on our host, 3 new directories were created.
$ ls config/
config_1 config_2 config_3
# Check the /etc/config directory in container 1, see that it links to the config_1 directory
$ docker exec volume-per-scaled-container_app_1 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_1
# Container 2
$ docker exec volume-per-scaled-container_app_2 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_2
# Container 3
$ docker exec volume-per-scaled-container_app_3 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_3
Notes
I think gitlab/gitlab-runner has its own entrypoint file, so you may need to chain them (see the sketch below).
You'll need to adapt this example to your specific setup/locations.
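For that chaining, a minimal sketch: end your entrypoint by exec-ing the image's original entrypoint instead of exec "$@". It assumes curl and python3 are available in the image (as in the example Dockerfile above), and the original entrypoint path is an assumption, so verify it first.
#!/bin/sh
# per-container config dir, as in entrypoint.sh above
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
mkdir -p "$CONFIG_DIR"
# point the runner's expected config location at this container's own directory,
# discarding whatever the image shipped there
rm -rf /etc/gitlab-runner
ln -s "$CONFIG_DIR" /etc/gitlab-runner
# hand off to the original entrypoint (path assumed, check with:
#   docker inspect --format '{{json .Config.Entrypoint}}' gitlab/gitlab-runner:latest)
exec /usr/bin/dumb-init /entrypoint "$@"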

How to obtain consistent behavior across different hosts for Docker volume permissions?

Kindly asking for help to make sense of what I'm doing here. I'm trying to mount a folder as read-only and have the apache user be able to read it. I'm seeing different behaviors on different servers.
I build and start the container as follows; note the second volume being mounted as read-only:
sudo docker build -f Dockerfile -t myimage .
sudo docker run -tid --name="mycontainer" -v /my_ro_folder:/var/mystuff:ro myimage
My Dockerfile is as follows (summarized):
FROM centos:7
ENV container docker
RUN yum -y install ....;
RUN mkdir /var/mystuff
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]
Now, I see two different behaviors on two Linux servers running the same distro, and I don't understand why. I tried removing containers and purging the system, thinking it was some cache, but to no avail.
SERVER1:
ls -la /my_ro_folder
drwxrwxrwx+ 1 netadmin users 128 Jun 25 12:12 .
$ id -g netadmin ; id -u netadmin
100
1032
SERVER2:
ls -la /my_ro_folder
drwxrwxrwx+ 1 netadmin users 128 Jun 25 12:12 .
$ id -g netadmin ; id -u netadmin
100
1026
Now, on SERVER1, I get permissions just fine in the container:
drwxrwxrwx 1 1032 users 128 Jun 25 10:12 mystuff
While on SERVER2 I don't; the permissions remain like this, and consequently the apache user can't read:
d--------- 1 1026 users 128 Jun 25 10:12 mystuff
There is no user 1026 or 1032 in either container. Both have the group users:x:100:, though.
What is going on? Why is there such a difference, and how can I get consistent behavior?
Thanks
Anything that depends on bind-mounts from the host will intrinsically have host-specific behavior. On native Linux systems, the host's user IDs will be visible inside the container, but on macOS they'll get remapped to something else. The actual content will vary between hosts, and if you try to deploy the container to another system, you'll have to bring the content along with it.
You can avoid this by including the content in your image, and not using a bind-mount at all.
FROM centos:7
RUN yum -y install ....;
# COPY creates the directory in the image
COPY mystuff /var/mystuff
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]
# no need for a VOLUME declaration of any sort
sudo docker run -d --name="mycontainer" -p 80:80 myimage # no -v option
If you're used to having a live development environment, this isn't it. But if you're looking for a deployment setup, this means you can run the image as-is without copying other content over or worrying about filesystem permissions. If you give each image build a unique tag then you can very easily roll back to yesterday's build, and you can run a copy of the proposed container in a pre-production environment without worrying about whether you've correctly copied the files over.
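As a usage sketch of the unique-tag idea (the tag names below are just an illustration):
# build each release with its own tag instead of only :latest
sudo docker build -t myimage:2021-06-25 .
sudo docker run -d --name="mycontainer" -p 80:80 myimage:2021-06-25
# rolling back is just replacing the container with yesterday's build
sudo docker rm -f mycontainer
sudo docker run -d --name="mycontainer" -p 80:80 myimage:2021-06-24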

Dockerfile and chown permissions

I am new to Docker and I am trying to do a simple setup, the scope of which is:
create a home folder and give it appropriate permissions
On host side:
I have a user called devel, which I put into the docker group.
When I run 'groups devel' I get back the group docker. The UID is 1000 and the GID is 1000.
my subuid file:
devel:1000:1
devel:100000:65536
my subgid file:
devel:1000:1
devel:100000:65536
Following a tutorial, I set the sysconfig file so the daemon starts with 'devel' as the option for user namespace remapping.
I then created this Dockerfile:
USER root
RUN groupadd -g 1000 devel
#Create the user with home directory
RUN useradd -d /var/opt/devel -u 1000 -g 1000 --shell /bin/bash devel
#Just for being very-very-very-very sure:
RUN chown -vhR devel:devel /var/opt/devel
#test with ls
RUN ls -ltr /var/opt/
USER devel
#test again by creating a file:
RUN touch /var/opt/devel/TEST.txt
RUN ls -ltr /var/opt/devel/TEST.txt
USER root
RUN ls -ltr /var/opt
USER devel
CMD ["/bin/bash"]
The result is that the directory which is created has the group "devel", but the owning user is always root.
After 12 hours of checking why, I had already disabled SELinux for another reason (it did not let me use chown at all), but now I am stuck and I don't know what other magic I need to do.
The Docker version is 18.09.1-ol (Oracle Linux 7).
Hope someone has an idea.
Thanks

Cannot run metricbeat in Docker

I am trying to run metricbeat using Docker on a Windows machine, and I have changed metricbeat.yml as per my requirements.
docker run -v /c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml docker.elastic.co/beats/metricbeat:5.6.0
but I am getting this error:
metricbeat2017/09/17 10:13:19.285547 beat.go:346: CRIT Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Why am I getting this?
What is the right way to make a permanent change to file content in a Docker container? (I don't want to change the configuration file each time the container starts.)
Edit:
A container is not meant to be edited/changed. If necessary, Docker volume management is available to externalize all configuration-related work. Thanks
There are 2 options here, I think.
The first is to ensure the file has the proper permissions:
chmod 644 metricbeat.yml
Or you can run your docker command with -strict.perms=false, which tells metricbeat not to care about the permissions on the metricbeat.yml file:
docker run \
  --volume="/c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml" \
  docker.elastic.co/beats/metricbeat:5.6.0 \
  -strict.perms=false
You can see more documentation about that flag in the link below:
https://www.elastic.co/guide/en/beats/metricbeat/current/command-line-options.html#global-flags
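If you also want the change to be permanent rather than mounted on every start, one common approach (a sketch, assuming your edited metricbeat.yml sits next to the Dockerfile) is to bake the file into a derived image:
FROM docker.elastic.co/beats/metricbeat:5.6.0
COPY metricbeat.yml /usr/share/metricbeat/metricbeat.yml
# metricbeat refuses config files that are group- or world-writable;
# if your base image builds as a non-root user, wrap this in USER root / USER <original>
RUN chmod go-w /usr/share/metricbeat/metricbeat.yml
Then build and run it without any -v option:
docker build -t my-metricbeat .
docker run my-metricbeat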

Understanding the difference in sequence of ENTRYPOINT/CMD between Dockerfile and docker run

Docker noob here...
I am trying to build and run an IBM DataPower container from a Dockerfile, but it doesn't seem to work the same as when just running docker run and passing the same parameters in the terminal.
This works (docker run)
docker run -it \
-v $PWD/config:/drouter/config \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-e DATAPOWER_WORKER_THREADS=4 \
-p 9090:9090 \
--name mydatapower \
ibmcom/datapower
... the key part being that it mounts the ./config folder, and the custom configuration is picked up by DataPower running in the container.
This doesn't (Dockerfile)
Dockerfile:
FROM ibmcom/datapower
ENV DATAPOWER_ACCEPT_LICENSE=true
ENV DATAPOWER_INTERACTIVE=true
ENV DATAPOWER_WORKER_THREADS=4
EXPOSE 9090
COPY config/auto-startup.cfg /drouter/config/auto-startup.cfg
Build:
docker build -t local/datapower .
Run:
docker run -it \
-p 9090:9090 \
--name mydatapower local/datapower
The problem is that DataPower doesn't pick up the auto-startup.cfg file, so the additional config options don't get used. I know the source file path is correct because if I misspell the file name, docker throws an error.
I have a theory that it might be running the inherited ENTRYPOINT or CMD before the config file is available. I don't know how to test or prove this. I don't know what the ENTRYPOINT or CMD is because the inherited image is not open source and I can't figure out how to find it.
Does that seem likely?
UPDATE:
The content of the auto-startup.cfg is:
top; co
ssh
web-mgmt
admin enabled
port 9090
exit
It simply enables the DataPower WebGUI.
The output when running it on the command line with:
docker run -it -v $PWD/config:/drouter/config -v $PWD/local:/drouter/local -e DATAPOWER_ACCEPT_LICENSE=true -e DATAPOWER_INTERACTIVE=true -e DATAPOWER_WORKER_THREADS=4 -p 9091:9090 --name myconfigureddatapower ibmcom/datapower
...contains this:
20170908T121729.015Z [0x8100006e][system][notice] : Executing startup configuration.
20170908T121729.970Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
...but with Dockerfile it doesn't. That's why I think the config files may be copied into place too late.
I've tried adding CMD ["/bin/drouter"] to the end of my Dockerfile to no avail.
I have tested your Dockerfile and it seems to be working. My auto-startup.cfg file is copied to the proper location, and when I launch the container it reads the file.
I get this output:
[root@ip-172-30-2-164 tmp]# docker run -ti -p 9090:9090 test
20170908T123728.818Z [0x8040006b][system][notice] logging target(default-log): Logging started.
20170908T123729.067Z [0x804000fe][system][notice] : Container instance UUID: 36bcca0e-6139-4694-91b0-2b7b66c3a498, Cores: 4, vCPUs: 4, CPU model: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz, Memory: 16049.1MB, Platform: docker, OS: dpos, Edition: developers-limited, Up time: 0 minutes
20170908T123729.071Z [0x8040001c][system][notice] : DataPower IDG is on-line.
20170908T123729.071Z [0x8100006f][system][notice] : Executing default startup configuration.
20170908T123729.416Z [0x8100006d][system][notice] : Executing system configuration.
20170908T123729.417Z [0x8100006b][mgmt][notice] domain(default): tid(8143): Domain operational state is up.
708f98be1390
Unauthorized access prohibited.
20170908T123731.239Z [0x806000dd][system][notice] cert-monitor(Certificate Monitor): tid(399): Enabling Certificate Monitor to scan once every 1 days for soon to expire certificates
20170908T123731.552Z [0x8100006e][system][notice] : Executing startup configuration.
20170908T123732.436Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20170908T123732.449Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
login:
To check that your file has been copied to the container you can run docker run -ti local/datapower sh to enter the container and then check the content of /drouter/config/.
Your base image command is CMD ["/bin/drouter"]; you can check it by running docker history ibmcom/datapower.
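If the docker history output is truncated or hard to read, docker inspect on the image shows the same information directly (a quick sketch):
# print the entrypoint and command baked into the base image
docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' ibmcom/datapower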
UPDATE:
The drouter user in the container must be able to read the auto-startup.cfg file. You have 2 options:
set your local auto-startup.cfg with the proper permissions (chmod 644 config/auto-startup.cfg),
or add these lines to the Dockerfile so drouter can read the file:
USER root
RUN chown drouter /drouter/config/auto-startup.cfg
USER drouter
