How do I mount a single file in docker-compose - docker

I have a project that needs to mount a single file into the Docker container, and I mount it like this:
agent:
    image: agent:latest
    container_name: agent
    volumes:
      - $PWD/status.txt:/status.txt
An IsADirectoryError occurs when I open status.txt in append mode:
with open('status.txt', 'a') as f:
    ...
    ...
docker-compose seems to be treating the file as a directory.
I would appreciate it if you could tell me how to solve this.

I can mount files just fine using the same syntax, although I use a relative path, e.g.:
volumes:
  - ./sourcedir/file.txt:/target/dir/mountedfile.txt
Where mountedfile.txt is essentially file.txt. There must be something else in your environment causing this issue. To troubleshoot, there are a couple of things you could do, including:
Get a shell into the container and run stat on the target file, e.g. stat mountedfile.txt. That should tell you if it's a file or directory.
Test your configuration manually with plain docker using -v to mount the volumes, e.g.:
docker run --rm -ti -v /sourcedir/file.txt:/target/mountedfile.txt ubuntu:latest bash
Also, there may be some useful information in a (somewhat) unrelated answer.
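The stat check is easy to try outside a container too. Here is a minimal local illustration (GNU stat assumed; a temp directory stands in for the container's filesystem) of how the output differs for a file versus a directory:

```shell
# Create one regular file and one directory, then compare what stat reports.
tmp=$(mktemp -d)
touch "$tmp/status.txt"      # a regular file
mkdir "$tmp/status.dir"      # a directory

stat -c '%n is a %F' "$tmp/status.txt"   # reports: regular empty file
stat -c '%n is a %F' "$tmp/status.dir"   # reports: directory
```

If the in-container stat reports a directory for the path you mounted, the bind mount resolved to a directory rather than the file you intended.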

Related

I want to use the functions written in the .vimrc file placed on the host side within the Docker

What I want to accomplish
I want to use the functions written in the .vimrc placed on the host side within Docker.
What I did
I put the .vimrc file in the /home/akihiro directory on the host side.
Using the docker run command, I mounted the host-side /home/akihiro directory and opened a Python file in Docker with Vim.
akihiro@akihiro-thinkpad-x1-carbon-5th:~$ docker run --rm -it -v /home/akihiro:/home --name test cnn_study:latest
As a result, the settings written in the .vimrc file did not work.
Next, I started a new container without mounting.
I created the /home/akihiro directory in the container.
I left the container.
I copied only the host-side /home/akihiro/.vimrc file into the container, and re-entered the container.
docker cp ./.vimrc 52b28f1ffea8:/home/akihiro
I opened a Python file with Vim.
As a result, the settings written in the .vimrc file did not work.
What you are doing is mapping the complete /home (or /home/akihiro) directory onto the container.
You can't do that, for two reasons:
There are more files in that directory. What do you expect to happen to them?
The files in the two folders are not merged; mapping the volume completely replaces the internal folder.
My gut feeling says that mapping the directory comes too late in the process.
The directory is already there in the container and therefore cannot be overwritten.
(At least, that's how I understand the strange effects I sometimes get with mappings.)
What you should do is only map the file:
$ docker run --rm -it -v /home/akihiro/.vimrc:/home/akihiro/.vimrc --name test cnn_study:latest
I do the same with .bashrc (to set the prompt to the name of the container).
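Since the rest of this thread is about docker-compose, the same single-file mounts can also be sketched in compose form (paths as in the question; the .bashrc line corresponds to the extra mount just mentioned):

```yaml
services:
  test:
    image: cnn_study:latest
    volumes:
      - /home/akihiro/.vimrc:/home/akihiro/.vimrc
      - /home/akihiro/.bashrc:/home/akihiro/.bashrc
```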

docker unable to create mount path and fails always

Docker is unable to create the mount path /foo/logs, which is shared across all containers in my docker-compose file. This error happens occasionally and I'm not sure what is causing it. For now we have to restart Docker or the machine to get around this problem, on the latest version of docker-compose.
volumes:
  - /foo/logs:/app/service/logs
docker-compose -f docker-compose.yaml --env-file=.env.qa up -d
Starting foo-service-container ... error
ERROR: for foo-service-container Cannot start service foo-service: error while creating mount source path '/foo/logs': mkdir /foo: read-only file system
ERROR: for foo-service Cannot start service foo-service: error while creating mount source path '/foo/logs': mkdir /foo: read-only file system
The various containers each have a volume setting like the following, so all container logs are spooled to a single location on the machine. I'm not sure how to make this robust:
- /foo/logs:/app/foo-service/logs
- /foo/logs:/app/foo-service1/logs
- /foo/logs:/app/foo-service2/logs
- /foo/logs:/app/foo-service3/logs
- /foo/logs:/app/foo-service4/logs
It seems that the /foo/logs filesystem is mounted read-only.
Use mount | grep "/foo/logs" to check the mount options for /foo/logs, and re-mount if needed with something like mount -o remount,rw /foo/logs (I'm assuming /foo/logs is not under your root file system).
Also, bind mounts shared between multiple containers can cause issues: file ownership can become messy. Use named volumes instead.
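That check can be scripted. The sketch below assumes Linux's /proc/mounts and GNU df; it is demonstrated on /tmp, but you would pass the path from the question (/foo/logs) instead:

```shell
# Print the mount options of the filesystem that contains a given path,
# so you can see whether it is mounted ro or rw.
mount_opts() {
    mp=$(df --output=target "$1" | tail -n 1)        # mount point holding $1
    awk -v mp="$mp" '$2 == mp { print $4; exit }' /proc/mounts
}

opts=$(mount_opts /tmp)
echo "/tmp is mounted with: $opts"
case ",$opts," in
    *,ro,*) echo "read-only: a remount (mount -o remount,rw) would be needed" ;;
    *)      echo "writable" ;;
esac
```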
Docker is trying to create a directory that will then be mounted into the containers.
If you try to create the directory yourself, i.e. mkdir /foo/logs, you'll get a similar error.
This is unrelated to Docker and the problem resides in your permissions on that specific path.
Isolate the issue from Docker by changing all source paths (the part before the :) to a directory on which you have write access, for example /tmp/foo.
Note the issue is not with /foo/logs but with /foo: as the message says, it cannot create /foo on a read-only file system; it did not even reach the point of creating logs inside /foo.
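To apply that isolation step, the source path in the compose entries would temporarily change to something like this (using /tmp/foo from the suggestion; the container path is from the question):

```yaml
volumes:
  - /tmp/foo/logs:/app/service/logs
```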

Save Docker container file to host filesystem

I have a Docker container that runs a Python app and generates a file as a result.
I would like to persist this file in my local file system and not to lose it at the end of the container's execution.
After reading other posts and documentation of volumes and data storage in Docker I have tried to solve it in different ways like:
Using a volume (created beforehand):
docker run --name my_container -v my_volume:/container_file_path my_container
I'm missing something here, because as I understand it I'm not referencing the host's path at any point.
Or directly referencing the host path and the container path (with this option I also had some problems with absolute vs. relative path usage):
docker run --name my_container -v host_path:container_file_path my_container
I also tried some other "variants" of the commands above (--mount instead of -v, changing target/source values, etc.) but I couldn't get it to work.
I'm using Windows Subsystem for Linux (WSL), which I've read may be the cause of the problem.
Could you guide me in what I am doing wrong? Thanks!
The second option with the bind-mount volume sounds good to me. You do need to use absolute paths, but you can use e.g. $(pwd) to make it simpler.
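A sketch of what that looks like in practice; the image name my_container and the container path /container_file_path are the question's placeholders, not real values:

```shell
# Build the bind-mount argument from an absolute host path.
# $(pwd) always expands to an absolute path, which -v requires for bind mounts.
host_dir="$(pwd)/output"
mkdir -p "$host_dir"                 # the host directory should exist up front

# Assemble the docker run command (placeholders from the question).
cmd="docker run --name my_container -v ${host_dir}:/container_file_path my_container"
echo "$cmd"
```

Anything the app writes under /container_file_path then lands in ./output on the host and survives the container's exit.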

Mounting the host Vm in docker-compose

I want to mount the host VM of Docker for Windows in my container (for backup purposes). I found a short article about it, which says to run
docker container run --rm -it -v /:/host alpine
to do this. I tried it and it works fine. Now I wanted to put it into a docker-compose file. However,
volumes:
  - / :/host:ro
doesn't work. I don't get an error message; the folder is just empty. The space in front of the colon was necessary: without it I got an error.
Does someone know how to set this up in docker-compose?
Just write it the same way you did with docker run; do not put any space between the directories:
volumes:
  - /:/host:ro
But with :ro Docker will not write anything to it and will simply read the contents of the directory. If you want to write to it, remove the :ro.

How can I mount a file in a container, that isn't available before first run?

I'm trying to build a Dockerfile for a webapp that uses a file-based database. I would like to be able to mount the file from the host.*
The file is in the root of the complete software install, so it's not really ideal to mount that complete dir.
Another problem is that before the first use, the database file isn't created yet. A first-time user won't have a database, but another user might. I can't 'mount' anything during a build,** I believe.
It could probably work like this:
First/new database start:
Start the container (without a mount).
The webapp creates a database.
Stop the container.
Subsequent starts:
Start the container, using -v to mount the file.
It would be better if that extra start/stop weren't needed for the user. Even if it is, I'm still looking for a user-friendly way to do this, possibly with two 'methods' of starting it (maybe I can define a first-boot variant in docker-compose as well as a 'normal' method?).
How can I do this in a simple way, so that it's clear for any first-time users?
* The reason is that you can copy your Dockerfile and the database file as a backup, and be up and running with just those 2 elements.
** How to mount host volumes into docker containers in Dockerfile during build
One approach that may work is:
Start the database in the build file in such a way that it has time to create the default file before exiting.
Declare a VOLUME in the Dockerfile for the file after the above instruction. This will cause the file to be copied into the volume when a container is started, assuming you don't explicitly provide a host path
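As a Dockerfile sketch of those two steps; the init_db.py script and the /app/data path here are assumptions standing in for whatever your app actually does:

```dockerfile
FROM python:3-slim
COPY . /app
WORKDIR /app
# Step 1: run the app once at build time so it creates the default
# database file (init_db.py is a hypothetical initialization script).
RUN python init_db.py
# Step 2: declare the volume AFTER the file exists; its contents are
# copied into the anonymous volume created at container start.
VOLUME /app/data
```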
Use data-containers rather than volumes. So the normal usage would be:
docker run --name data_con my_db echo "my_db data container"
docker run -d --volumes-from data_con my_db
...
The first container should exit immediately but set up the volume that is used in the second container.
I was trying to achieve something similar and managed to do it by mounting a folder, instead of the file, and creating a symlink in the Dockerfile, initially pointing to a non-existing file:
docker-compose.yml
version: '3.0'
services:
  bash:
    build: .
    volumes:
      - ./data:/data
    command: ['bash']
Dockerfile
FROM bash:latest
RUN ln -s /data/.bash_history /root/.bash_history
Then you can run the container with:
docker-compose run --rm bash
With this setup, you can push an empty "data" folder into the repository for example (and exclude its content with .gitignore). In the first run, inside the container /root/.bash_history will be a "broken" symlink, pointing to a file that does not exist. When you exit the shell, bash will write the history to /root/.bash_history, which will end up in /data/.bash_history.
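The mechanism is easy to verify locally without Docker. This is the same dangling-symlink-then-write sequence, simulated with a temp directory:

```shell
# Simulate the setup: "data" stands in for the bind-mounted ./data folder.
workdir=$(mktemp -d)
mkdir "$workdir/data"

# Create the symlink before its target exists, as the Dockerfile does.
ln -s "$workdir/data/.bash_history" "$workdir/.bash_history"
[ -e "$workdir/.bash_history" ] || echo "symlink is dangling (target missing)"

# Writing through the symlink creates the target inside data/.
echo "ls -la" >> "$workdir/.bash_history"
cat "$workdir/data/.bash_history"    # the written line now lives in data/
```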
This is probably not the correct approach.
If you have multiple containers that are trying to share some information through the file-system, you should probably let them share some directory.
That way, the flow is simple and very hard to get wrong.
You simply mount the same directory, say /data (from the host's perspective) into all the containers that are trying to use it.
When an application starts and it can't find anything inside that directory, it can gracefully stop and exit with a code that says: "Cannot start, DB not initialized yet".
You can then configure some mechanism with a growing timeout to try and restart that container until you're successful.
On the other hand, the app that creates the DB can start and create it inside the directory or find an existing file to use.
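The "fail fast, then retry" startup check described above can be sketched as a small entrypoint guard (the DB path is an assumption for illustration):

```shell
# Exit with a distinct status if the shared DB file is not there yet,
# so an outer restart loop can tell "not initialized" apart from crashes.
start_app() {
    db="$1"
    if [ ! -f "$db" ]; then
        echo "Cannot start, DB not initialized yet" >&2
        return 3
    fi
    echo "DB found, starting"
}

# Hypothetical path inside the shared /data mount.
start_app /tmp/shared-data/app.db || echo "exit status: $?"
```

A restart policy (or a simple wrapper loop with a growing sleep) can then retry the container only when it exits with that specific status.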
