Docker unable to create mount path and always fails

Docker is unable to create the mount path /foo/logs, which is shared across all containers in my docker-compose file. This error happens occasionally and I'm not sure what is causing it. We currently have to restart Docker or the machine to get around the problem, on the latest version of docker-compose.
volumes:
- /foo/logs:/app/service/logs
docker-compose -f docker-compose.yaml --env-file=.env.qa up -d
Starting foo-service-container ... error
ERROR: for foo-service-container Cannot start service foo-service: error while creating mount source path '/foo/logs': mkdir /foo: read-only file system
ERROR: for foo-service Cannot start service foo-service: error while creating mount source path '/foo/logs': mkdir /foo: read-only file system
The various containers have their respective volume settings as follows, so all container logs are spooled to a single location on the machine. I'm not sure how to make this robust.
- /foo/logs:/app/foo-service/logs
- /foo/logs:/app/foo-service1/logs
- /foo/logs:/app/foo-service2/logs
- /foo/logs:/app/foo-service3/logs
- /foo/logs:/app/foo-service4/logs

It seems that the filesystem containing /foo/logs is mounted as read-only.
Use mount | grep "/foo/logs" to check the mount options for /foo/logs and re-mount if needed - something like mount -o remount,rw /foo/logs (I'm assuming /foo/logs is not under your root filesystem).
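That check can be scripted; a minimal sketch, assuming a Linux host with GNU coreutils (the check_ro helper name is invented here, and on the affected machine you would point it at /foo):

```shell
# Report whether the filesystem containing a given path is mounted read-only.
check_ro() {
  mp=$(df --output=target "$1" | tail -n 1)                 # mountpoint holding the path
  opts=$(awk -v m="$mp" '$2 == m {o = $4} END {print o}' /proc/mounts)
  case ",$opts," in
    *,ro,*) echo ro ;;
    *)      echo rw ;;
  esac
}

check_ro /tmp    # on the affected host, run: check_ro /foo
```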
Also, sharing bind mounts between multiple containers can cause issues: file ownership may become messy. Consider using named volumes instead.

Docker is trying to create a directory that will then be mounted into the containers.
If you try to create the directory yourself, i.e. mkdir /foo/logs, you'll get a similar error.
This is unrelated to Docker; the problem lies in your permissions on that specific path.
Isolate the issue from Docker by changing all source paths (the part before the :) to a directory you have write access to, for example /tmp/foo.
Note that the issue is not with /foo/logs but with /foo: as the message says, it cannot create /foo on a read-only filesystem; it never even reached the point of creating logs inside /foo.
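A hedged compose fragment for that isolation test (service name taken from the question, /tmp/foo chosen arbitrarily):

```yaml
# Temporarily point the bind source at a location you can write to; if the
# container now starts, the problem is the permissions on /foo, not Docker.
services:
  foo-service:
    volumes:
      - /tmp/foo/logs:/app/service/logs
```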

Related

Why can't my Docker container find the file it's supposed to create?

I have a Docker container (a Linux container running on Windows with WSL 2) running a .NET 5.0 application, whose Dockerfile and docker-compose.yml were created by someone else. I spun it up with docker run, passing a single environment variable and a port mapping. It works just fine until it attempts to create a file, which it does with a statement like this: System.IO.File.WriteAllText($"/output_json/myfile.json", jsonString);, and errors out. The error message says:
Could not find a part of the path '/output_json/myfile.json'.
Since a Docker container is essentially a virtualized filesystem, I assume I need to allocate some space to the container, or share a folder on the host machine with it, so that it has an accessible location to save the file. Is that correct?
EDIT: I've just found this in docker-compose.yml:
services:
  <servicename>:
    volumes:
      - ./output:/output_json
Doesn't this mean that an output_json directory is supposed to be created? Does docker-compose not have any bearing on a container created with docker run?
The path /output_json probably doesn't exist in the Docker image. That could be because you're meant to map a directory on your host to that path; the container can then put its output there and you can grab it after the container is done.
To try it, make an empty directory and map it to the /output_json path in your container by running the following two commands from a command line:
mkdir %temp%\container_output
docker run -v %temp%\container_output:/output_json <other options> <image name>
Then do cd %temp%\container_output and see what output the container has made.
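On Linux or macOS the same experiment looks like this (a sketch only; <other options> and <image name> are the placeholders from the commands above):

```shell
mkdir -p /tmp/container_output
docker run -v /tmp/container_output:/output_json <other options> <image name>
ls /tmp/container_output    # inspect what the container wrote
```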

How do I mount a single file in docker-compose

I have a project that needs to mount a single file into the Docker container, and I mount it like this:
agent:
    image: agent:latest
    container_name: agent
    volumes:
      - $PWD/status.txt:/status.txt
An IsADirectoryError occurs when I modify status.txt in append mode:
with open('status.txt','a') as f:
...
...
docker-compose seems to be treating the file as a directory.
I would appreciate it if you could tell me how to solve this.
I can mount files just fine using the same syntax, although I use a relative path, e.g.:
volumes:
- ./sourcedir/file.txt:/target/dir/mountedfile.txt
Where mountedfile.txt is essentially file.txt. There must be something else in your environment causing this issue. To troubleshoot, there are a couple of things you could do, including:
Get a shell into the container and run stat on the target file, e.g. stat mountedfile.txt. That should tell you if it's a file or directory.
Test your configuration manually with plain docker using -v to mount the volumes, e.g.:
docker run --rm -ti -v /sourcedir/file.txt:/target/mountedfile.txt ubuntu:latest bash
Also, there may be some useful information in a (somewhat) unrelated answer.
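One common cause worth ruling out, though the question doesn't confirm it: if $PWD/status.txt does not exist when the container is first started, Docker creates the missing bind source as a directory on the host, and the container then sees a directory too. A host-side sketch to check and repair that:

```shell
# If a previous run left a directory named status.txt, replace it with a
# regular file before running docker-compose up again.
if [ -d status.txt ]; then
  rmdir status.txt                      # the auto-created directory is empty
fi
[ -f status.txt ] || touch status.txt   # pre-create the file so the bind mounts a file
ls -ld status.txt
```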

docker volume over fuse: Transport endpoint is not connected

So I have this remote folder /mnt/shared mounted with fuse. It is mostly available, except for occasional disconnections.
The mounted folder /mnt/shared becomes available again once the connection is re-established.
The issue is that I expose this folder to my app through a Docker volume, as /shared. When I start the container, the volume is available.
But if a disconnection happens in between, then even though /mnt/shared is available again on the host machine, the /shared folder is not accessible from the container, and I get:
user@machine:~$ docker exec -it e313ec554814 bash
root@e313ec554814:/app# ls /shared
ls: cannot access '/shared': Transport endpoint is not connected
In order to get it to work again, the only solution I have found is to docker restart e313ec554814, which brings downtime to my app and is therefore not an acceptable solution.
So my questions are:
Is this somehow a docker "bug" not to reconnect to the mounted folder when it is available again?
Can I execute this task manually, without having to restart the whole container?
Thanks
I would try the following solution.
If you currently mount the volume into Docker like so:
docker run -v /mnt/shared:/shared my-image
then create an intermediate directory /mnt/base/shared and mount that instead:
docker run -v /mnt/base/shared:/base/shared my-image
and also adjust your code to refer to the new path, or create a link from /base/shared to /shared inside the container.
Explanation:
The problem is that the mounted directory /mnt/shared is probably deleted on the host machine when a disconnection occurs, and a new directory is created once the connection is back. But the container started running with a mapping to the old directory, which was deleted. By creating an intermediate directory and mapping to that instead, you avoid the stale mapping.
Another solution that might work is to mount the directory using bind-propagation=shared, e.g.:
--mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared
See the Docker docs explaining bind propagation.
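In a compose file, the same option would look roughly like this (long volume syntax, compose file format 3.2 or newer; the service name is illustrative):

```yaml
services:
  app:
    volumes:
      - type: bind
        source: /mnt/shared
        target: /shared
        bind:
          propagation: shared
```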

How do I mount volumes on a docker container that is being created by another docker container?

I have a containerized REST API server using Docker. I have mounted a folder on my host machine into this REST container using the docker flag:
--mount type=bind,source=<path/on/hostmachine>,target=<path/on/container>
This works perfectly, and I can see files mapped in real time between my host and the REST container. Now, my REST container generates a yaml file containing the configuration for the new container that comes up, and writes this configuration file to the shared volume mount.
I want to mount the same <path/on/hostmachine>, which contains the configuration files generated by the REST container, into the new container that comes up. The issue is that I do not bring the container up using the CLI, which is why I cannot use the --mount flag. I tried multiple ways of doing it through the yaml file.
Using an absolute path:
volumes:
- <absolute/path/on/hostmachine>:<path/on/new/container>
did not work.
Using volumes with the type defined:
volumes:
- type: bind
  source: <absolute/path/on/hostmachine>
  target: <path/on/new/container>
did not work, and gave a "volumes should be a string" error.
How do I mount the same home directory to the new container that comes up on the rest container?
I figured out the issue in the first approach. I was writing the path as "./Users/username/Desktop" instead of "/Users/username/Desktop", which caused the path mismatch, and I misunderstood the lack of errors. Rookie mistake, mea culpa.
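For reference, the long volume syntax from the second attempt does work when the YAML is well-formed - a space after each colon and compose file format 3.2 or newer (the bracketed paths are the placeholders from the question):

```yaml
volumes:
  - type: bind
    source: <absolute/path/on/hostmachine>
    target: <path/on/new/container>
```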

How can I mount a file in a container, that isn't available before first run?

I'm trying to build a Dockerfile for a webapp that uses a file-based database. I would like to be able to mount the file from the host.*
The file is in the root of the complete software install, so it's not really ideal to mount that complete directory.
Another problem is that before first use the database file hasn't been created yet. A first-time user won't have a database, but another user might. And I believe I can't mount anything during a build.**
It could probably work like this:
First/new database start:
Start the container (without mount).
The webapp creates a database.
Stop the container
subsequent starts:
Start the container using a -v to mount the file
It would be better if that extra start/stop weren't needed for the user. Even if it is, I'm still looking for a user-friendly way to do this, possibly with two 'methods' of starting it (maybe I can define a first-boot method in docker-compose as well as a 'normal' one?).
How can I do this in a simple way, so that it's clear for any first-time users?
* The reason is that you can copy your Dockerfile and the database file as a backup, and be up and running with just those 2 elements.
** How to mount host volumes into docker containers in Dockerfile during build
One approach that may work is:
Start the database in the build file in such a way that it has time to create the default file before exiting.
Declare a VOLUME in the Dockerfile for the file after the above instruction. This will cause the file to be copied into the volume when a container is started, assuming you don't explicitly provide a host path
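A rough Dockerfile sketch of those two steps - the base image, init command, and data path are all invented for illustration, and note that VOLUME takes a directory, so the database file would live inside it:

```dockerfile
FROM my-webapp-image
# Run the app just long enough to create the default database file, then stop.
RUN start-webapp --init-db && stop-webapp
# Declared after the file exists: its contents are copied into a fresh
# anonymous volume at container start, unless a host path is supplied.
VOLUME /app/data
```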
Use data-containers rather than volumes. So the normal usage would be:
docker run --name data_con my_db echo "my_db data container"
docker run -d --volumes-from data_con my_db
...
The first container should exit immediately but set up the volume that is used in the second container.
I was trying to achieve something similar and managed to do it by mounting a folder, instead of the file, and creating a symlink in the Dockerfile, initially pointing to a non-existing file:
docker-compose.yml
version: '3.0'
services:
  bash:
    build: .
    volumes:
      - ./data:/data
    command: ['bash']
Dockerfile
FROM bash:latest
RUN ln -s /data/.bash_history /root/.bash_history
Then you can run the container with:
docker-compose run --rm bash
With this setup, you can push an empty "data" folder into the repository for example (and exclude its content with .gitignore). In the first run, inside the container /root/.bash_history will be a "broken" symlink, pointing to a file that does not exist. When you exit the shell, bash will write the history to /root/.bash_history, which will end up in /data/.bash_history.
This is probably not the correct approach.
If you have multiple containers that are trying to share some information through the file-system, you should probably let them share some directory.
That way, the flow is simple and very hard to get wrong.
You simply mount the same directory, say /data (from the host's perspective) into all the containers that are trying to use it.
When an application starts and it can't find anything inside that directory, it can gracefully stop and exit with a code that says: "Cannot start, DB not initialized yet".
You can then configure some mechanism with a growing timeout to try and restart that container until you're successful.
On the other hand, the app that creates the DB can start and create it inside the directory or find an existing file to use.
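The "gracefully stop" check can be a few lines in the entrypoint; a sketch in which the function name, database path, and exit code are all invented:

```shell
# Refuse to start until the shared DB file exists, so a restart policy
# (or a retry loop with a growing timeout) can try again later.
wait_for_db() {
  if [ ! -f "$1" ]; then
    echo "Cannot start, DB not initialized yet" >&2
    return 3
  fi
}

wait_for_db /data/app.db || echo "would exit and wait for a restart"
```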
