Volume mount not writing out to host machine - docker

I'm trying to set up my docker container with a host machine directory mounted to the container for writing logs out to. My dockerfile is pretty simple:
FROM golang:1.9.2
WORKDIR /
COPY daemon /
CMD ["./daemon", "-d", "debug"]
Where daemon is the compiled binary of a Go program. I'm starting the container via the following command:
docker run -d --restart unless-stopped -v /var/log/scope/:/var/log/scope/ slack_daemon
Where slack_daemon is the name of the image the above dockerfile is built into.
However, writes to /var/log/scope inside the container are not showing up on the host machine. Anything I place in that directory on the host is visible inside the container, but neither writing to existing files nor creating new files in that directory from inside the container shows up on the host. No error is thrown by docker or any of the commands; it simply isn't writing anything out to the host directory.
Running Docker 17.09.0-ce-mac35, stable channel, on OSX 10.12.6.
Things I've tried:
Running the docker run command as sudo.
Adding the --privileged flag to docker run.
chmod 777 on /var/log/scope on the host machine.
I specifically need that directory in both host and container, as our code and log handler both expect that location and it would be too large of a change for this current issue to alter that.
Any guidance on this would be appreciated. Most of the sources online that I can find suggest that this is an SELinux issue, but I'm not getting the "permission denied" error, and this is OSX, not Linux.
I've also checked on the /var dir itself. It aliases to /private/var:
lrwxr-xr-x@ 1 root wheel 11 Oct 30 10:54:11 2017 var -> private/var
And /private is by default a shared directory within the file sharing settings in docker.
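For completeness, this is a minimal way to reproduce what I'm seeing (alpine here is just an arbitrary test image, not part of my setup):
docker run --rm -v /var/log/scope/:/var/log/scope/ alpine sh -c "echo hello > /var/log/scope/test.txt"
ls /var/log/scope/   # on the host: test.txt does not show up, even though the write succeeded in the container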

Related

docker-compose docker-entrypoint-initdb.d Permission denied

I am trying to run the puppet pupperware suite (all 3 servers: puppet server, puppet DB, DB server).
I am using the official YAML file provided by puppetlabs for docker-compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with docker-compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
And as a result, the build fails (only the puppet server comes up, but not the other ones).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, and I am not sure how to use a Dockerfile and docker-compose simultaneously.
Thank you for your help.
The docker-compose file is from the Puppet 6 era. The docker images that the Pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
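For reference, this is roughly what the pinned image lines look like in the compose file (the service names here are from memory and may need adjusting against the actual pupperware docker-compose.yml):
services:
  puppet:
    image: puppet/puppetserver:6.14.1
  postgres:
    image: postgres:9.6
  puppetdb:
    image: puppet/puppetdb:6.13.1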
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the docker daemon uses to build your image, including the commands that are executed inside the container's Linux environment during the build. docker-compose then builds from that Dockerfile (via a build: entry for the service) and starts the containers defined in your docker-compose.yml.
So to solve the permission problem, you should add a RUN instruction, which executes a Linux command in the shell at build time, and use it to fix the permissions on (or add data to) that folder, as sketched below.
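As a rough sketch (the ownership change here is my assumption, not something taken from the pupperware files), a wrapper Dockerfile around the postgres image could look like this, and you would then point the service at it with a build: entry in docker-compose.yml instead of image::
# hypothetical wrapper around the postgres image used by pupperware
FROM postgres:9.6
# make sure the init directory exists and is readable by the postgres user at build time
RUN mkdir -p /docker-entrypoint-initdb.d \
 && chown -R postgres:postgres /docker-entrypoint-initdb.d \
 && chmod 755 /docker-entrypoint-initdb.d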
Also look at this answer

file mounts as directory instead of file in docker-in-docker (dind)

When the command docker run --rm -v $(pwd)/api_tests.conf:/usr/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation is run, the api_tests.conf file is mounted into the container as a directory instead of a file.
I went through Single file volume mounted as directory in Docker and a few other similar questions on Stack Overflow but was unable to find the right solution.
I have tested the same code on my local Mac laptop, and there the file from the local machine mounts into the container as a file, but locally I don't have a docker-in-docker setup.
I have Dockerfile as below.
FROM alpine:latest
MAINTAINER Basavaraj
RUN apk add --no-cache python3 \
&& pip3 install --upgrade pip
WORKDIR /api-automation
COPY . /api-automation
RUN pip --no-cache-dir install .
ENTRYPOINT "some command"
and I have the build.sh file as below,
#!/bin/bash
docker pull local.artifactory.swg-devops.com/api-automation
# creating file with name "api_tests.conf" by adding configuration data
echo "configuration data" > api_tests.conf
# it displays all the configuration data written to api_tests.conf
cat $(pwd)/api_tests.conf
docker run --rm -v $(pwd)/api_tests.conf:/usr/.aiops/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation
Now we are calling the build.sh file from the GoCD environment.
It looks like the docker run command is executed in docker-in-docker (dind), and as a result the client spawns the docker container on a different host, where the file (api_tests.conf) being created does not exist.
Because of this, the file (api_tests.conf) is mounted as an empty directory in the container.
What are the different solutions for mounting the file in a docker-in-docker environment?
Can we share the file (api_tests.conf) we created with the host where the docker container is spawned?
I think the problem you're having is most likely because of using dind, although it's worth pointing out that this issue would also occur if you had mounted the docker socket into another container as well.
This is because when you ask the docker daemon to mount a directory, your docker client (CLI) doesn't actually mount the file/directory itself; it just passes a request to the docker daemon to mount that location from the daemon's local file system. And this is where the problem is: when you're using dind or sharing the docker socket, that file system usually isn't where you think it is, and hence the file/directory doesn't exist from the daemon's point of view.
So in your case the $(pwd) is possibly being expanded to some well known/existing directory path, and because the file doesn't exist on the daemon's file system, the daemon creates and mounts that path as a directory. That's my guess at least, as I've seen similar behaviour before when using dind/docker socket sharing in other setups.
One crazy solution to this would be to bind mount the files you want into the dind container at startup, and then bind mount those files from within the dind container into any subsequent containers. However, bear in mind this is precisely the kind of file system usage the dind documentation warns against because of instability and potential data loss.
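To illustrate that idea (the docker:dind image, the /configs path and the use of $(pwd) here are just examples, not taken from your setup):
# on the outer host: start the dind container with the config file mounted into it at a known path
docker run --privileged -d --name dind -v $(pwd)/api_tests.conf:/configs/api_tests.conf docker:dind
# from the client that talks to that dind daemon: mount the file from the dind container's own filesystem
docker run --rm -v /configs/api_tests.conf:/usr/.aiops/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation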
Hope this helps.

Change file permissions in mounted folder inside docker container on Windows Host

Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use docker with Linux containers, I highly recommend not using Docker for Windows at all and instead starting the entire docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine which starts a boot2docker instance inside a Virtualbox VM with a shared folder on /c/Users from which you can mount volumes into your containers. The permissions of said volumes are the ones the question is about. The VMs are stored under /c/Users/tom/.docker/
I chose to use the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the different hypervisors.
Original question
I am currently trying to set up PHPMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found: Cannot call chown inside Docker container (Docker for Windows) and thought this might be somewhat related but it appears to apply only to MongoDB.
This is my docker-compose.yml
version: "3"
services:
pma:
image: (secrect company registry)/phpmyadmin
ports:
- 9090:80
volumes:
- /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and change into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently.
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245: chmod 655 config.inc.php
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my user home, but then VBox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
I had the same problem of not being able to change ownership even after using chown. As I researched, it was because of NTFS volumes being mounted inside an ext filesystem. So I used another approach.
Volumes internal to docker are free from these problems, so you can mount your file on an internal docker volume and then create a hard link to that file inside your local folder wherever you want:
sudo ln $(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>) <absolute_path_of_destination>
This way you can have your files in the desired place, inside docker and without any permission issues, and you will be able to modify the contents of the file as with a normal volume mount, thanks to the hard link.
Here is a working implementation of this process which mounts and links a directory. In case you want to know the details, see the possible fix section in the issue.
EDIT
Steps to implement this approach:
1. Mount the concerned file in an internal docker volume (also known as a named volume).
2. Before making the hard link, make sure the volume and the concerned file are actually present. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run which creates the required file and exits, for example:
docker run --rm -itd \
-v "<Project_name>_<volume_name>:/absolute/path" \
<image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required file. Here <Project_name> is the name of your project (by default, the name of the folder the project is in) and <volume_name> is the same volume name we want to use in our original container. <image> can be the same image that is already used by your original containers.
3. Create a hard link in your OS to the actual file location on your system. You can find that location with docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>; the file will be at <that mountpoint>/<my_file>. Linux users can use ln in the terminal and Windows users can use mklink in the command prompt.
In step 3 we have not used /absolute/path, since <volume_name> already refers to that location and we only need to refer to the file.
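Putting those steps together with made-up names (myproject, pma_config and config.inc.php are placeholders; substitute your own project, volume and file names):
# step 2: create the named volume and the file inside it
docker run --rm -v "myproject_pma_config:/var/www/public" <image> sh -c "touch /var/www/public/config.inc.php"
# step 3: hard-link the file out of the volume (Linux; on Windows use mklink as noted above)
sudo ln "$(docker volume inspect --format '{{ .Mountpoint }}' myproject_pma_config)/config.inc.php" ./config.inc.php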
Try one of the following:
If you can rebuild the image (secret company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
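A minimal sketch of what that Dockerfile could look like (the FROM line is the redacted image name from above, and I'm assuming config.inc.php lives at /var/www/public inside the image):
FROM (secret company registry)/docker-stretchimal-apache2-php7-pma
USER root
RUN chmod 655 /var/www/public/config.inc.php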
Try to exec using the user root explicitly
docker exec -it -u root [container] bash

Auto-restart Docker container when contents of host folder change

I am running a Docker container in CoreOS (host) and mounted a host folder onto one of the container's folders.
docker run -v /home/core/folder_name:/folder_name <container_name>
Now, each time I am changing (insert/delete) some file in that host folder (folder_name), I have to restart the container (container_name) to see the effects.
docker restart <container_name>
Is there any way from the host side or docker side to restart it automatically when there is a change (insert/delete) in the folder?
Restarting the docker container on a folder change is rather antithetical to the whole notion of the -v option in the first place. If you really, really need to restart the container in the manner you are suggesting, then the only way to do it is from the docker host. There are a couple of tools I can name off the top of my head (there are definitely more) that you could use to monitor the host folder and trigger the docker restart <container_name> command when a file is inserted or deleted: incron and inotify-tools. Here is another question someone asked that is similar to yours, where the answer recommended using one of the tools I suggested.
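For example, a bare-bones watcher using inotifywait from inotify-tools could look like this (the folder and container name are the ones from your question; adjust the watched events as needed):
#!/bin/bash
# restart the container whenever something is created, deleted or modified in the watched folder
while inotifywait -r -e create -e delete -e modify /home/core/folder_name; do
  docker restart <container_name>
done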
Now, there is no way that the files in the host folder are not also being changed inside the docker container. It must be that the program you are using in the docker container isn't updating its view of the /folder_name folder after it starts up. Is it possible for you to force the program you are running in the docker container to refresh or update? The -v option works via bind mounting and has been a stable feature in docker for quite a while. With bind mounting, the /home/core/folder_name folder IS (for all practical purposes) the same folder as /folder_name in the container.
Run the command:
docker run -t -i -v /home/core/folder_name:/folder_name <container_name> /bin/sh
This command gives you an interactive shell within the container. In this shell issue the command:
cd /folder_name; touch a_file
Now go to /home/core/folder_name on the docker host in a shell or some file browser. The file a_file will be there. You can delete that file on the host and go back to the shell running in the docker container and run ls /folder_name. The file a_file will not be there.
So, you either need to use inotify or incron to go about restarting your container anytime a file changes on the host, or figure out how to work with the program you are running in the docker container to have it update its view of the /folder_name folder.

How to mount a directory in a Docker container to the host?

Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp of the docker container.
My question
I need these log files to be automatically saved on the host too, so how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I have also seen Docker - Mount Directory From Container to Host, but it doesn't solve my problem; I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time. That means you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: From the host to the container, and not vice-versa. When you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a "regular ole" linux mount --bind, which means that the host directory will temporarily "override" the container directory. Nothing is actually deleted or overwritten on the destination directory, but because of the nature of containers, that effectively means it will be overridden for the lifetime of the container.
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host directory volume mount).
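For instance, a rough sketch of that first option (the /host-logs mount point and backup path are made up for illustration):
# mount a host directory alongside the app's own log directory
docker run -d --name myapp -p 8080:8080 -v /var/lib/myapp-backup:/host-logs myapp:latest
# then, from the container's startup script or a cron entry inside the container:
cp -r /var/lib/myapp/. /host-logs/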
You could also use docker cp to move files from your container to your host. That is kinda hacky and definitely not something you should build your infrastructure automation on, but it works very well for one-off copies or debugging.
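For example, to grab the logs from the container in the question onto the host as a one-off (the destination path is up to you):
docker cp myapp:/var/lib/myapp /var/lib/myapp-backup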
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see if your logs are under the /var/lib/docker/volumes/... path associated with your container's volume.
Or you can redirect the output of docker logs <containername> to a host file.
For more examples, see this gist.
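For example (the container name is the one from the question; the output path is arbitrary):
# see which host paths back the container's volumes
docker inspect -f '{{ json .Mounts }}' myapp
# or dump the container's stdout/stderr to a file on the host
docker logs myapp > /tmp/myapp.log 2>&1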
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
