docker-compose caches run results - docker

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, if that affects it in some way. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.

The image you are using relies on a volume to mount your current directory, so the file conf.opvn is mounted into the Docker container rather than copied.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the access rights of the file and of the folder in the Docker container where the file is mounted. Try changing the file's permissions to 777 before starting the process and check again.
You can find a discussion about this in the official Docker forum.
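A minimal way to try that suggestion, reusing the file name from the question (chmod 777 is just what the answer proposes, not a general recommendation):
chmod 777 conf.opvn
docker-compose run vpn conf.opvn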

Related

Docker Desktop on Hyper-V - bind mount does not propagate inotify on file copy

I have Docker Desktop installed on my dev machine, with WSL 2 disabled. I have shared my entire C:/ drive.
Then I have a container with a .NET 6 (Core) application inside that uses a FileSystemWatcher to observe one directory and read any file that is pasted into it.
I read in several articles on the internet that WSL 2 does not propagate notifications from the Windows file system to the underlying Linux distribution that Docker runs on, hence there is no way to bind the directory that I have to "watch" to the app in the container. So I switched to the old Hyper-V backend of Docker.
I run the container with the following command:
docker run `
--name mlc-importer `
-v C:/temp/DZBank:/opt/docker/mlc_importer/dfs/DZBank `
-v C:\temp\appsettings.json:/app/appsettings.json `
-v C:\temp\log4net.config:/app/log4net.config `
mlc-importer
The container starts and begins "watching" for new files. The strange thing is that when I cut a file and paste it into the directory, the app in the container registers the new file and reads it, but when I copy the file and paste it into the directory, the app in the container does not register or read it.
Can someone help me? I can't figure out where the problem comes from.
Thanks in advance,
Julian
I managed to solve my problem, and I'll post the solution here in case somebody encounters the same issue.
The problem was in the file itself. I found this out when I started a new container with only Debian, installed inotify-tools, and bind-mounted the same path. When I copied the file and pasted it into the bound directory the output was:
three MODIFY events.
When I cut the file and pasted it into the same directory, the events were one CREATE and two MODIFY.
So with copy it is three MODIFY events; with cut it is one CREATE and two MODIFY.
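For reference, a minimal version of that test, assuming the same host directory from the question is bind-mounted into a plain Debian container:
docker run -it --rm -v C:/temp/DZBank:/watch debian bash
# then, inside the container:
apt-get update && apt-get install -y inotify-tools
inotifywait -m /watch   # prints one line per CREATE/MODIFY/... event on the mounted directory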
Then I inspected the copied file and saw this:
When I checked the checkbox and hit OK, everything was fine. And since the container app (from the post) only hooks the "file created" callback, it does not trigger when the file is merely modified.
Hope this helps someone with a similar problem.

Change default volume mount point for docker rootless?

I saw this post with different solutions for a standard Docker installation:
How to change the default location for "docker create volume" command?
At first glance, I'm struggling to repeat those steps to change the default mount point for a rootless installation.
Should it be the same? What would be the procedure?
I just got it working. I had some issues because I had the service running while trying to change configurations. Key takeaways:
The config file is indeed stored in ~/.config/docker/. One must create a daemon.json file there in order to change preferences. We would like to change the data-root option (and possibly storage-driver, in case the drive's filesystem lacks the required capabilities); see the sketch after this list.
To start and stop the rootless service one runs systemctl --user [start | stop] docker.
a. Running the system-wide service starts a parallel and separate instance of Docker, which is not rootless.
b. When stopping, make sure to stop docker.socket first.
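A minimal sketch of those takeaways, assuming /data/docker is the (hypothetical) location with more space:
# stop the rootless daemon, socket before service
systemctl --user stop docker.socket docker
# create the rootless config with the new data-root
mkdir -p ~/.config/docker
cat > ~/.config/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF
systemctl --user start docker
docker info --format '{{.DockerRootDir}}'   # should now print /data/docker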
Sources are (see the Usage section for rootless) and (config file information).
We ended up with an indirect solution. We identified the directory where the volumes are stored by default and created a symbolic link that points to the place where we actually want to store the data. In our case that was enough. Something like this:
sudo ln -s /data /home/ubuntu/.local/share/docker/volumes
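A hedged sketch of the full sequence behind that, assuming the default rootless data directory and that the data should end up under /data/volumes (all paths are illustrative):
# stop the rootless daemon first (socket before service)
systemctl --user stop docker.socket docker
# move any existing volume data to the new drive, then link it back
sudo mv /home/ubuntu/.local/share/docker/volumes /data/volumes
sudo ln -s /data/volumes /home/ubuntu/.local/share/docker/volumes
systemctl --user start docker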

Provide GitHub file as default conf file in a docker-compose volume

So my question is whether it is possible to have a volume like:
"${my_conf_file}:-raw.my/GitHub/file.git":/conf.json
This would be my goal; however, I cannot find anything related to it. In the end, if the user has a file, that file should be passed; otherwise either conf.json should not be replaced at all (because the GitHub file is already there, to be overridden by a conf file the user might have) or the file from GitHub should be passed again.
It is best to figure out the first part ("${my_conf_file}:-raw.my/GitHub/file.git") ahead of the docker run.
In your start script (which calls docker run or uses your docker-compose.yml), add logic to determine which config file you want (the user's, conf.json itself, or the one from GitHub).
Once you can script that, you can add your docker run -v call, which will mount the right file to /conf.json in the container.
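A hedged sketch of such a start script; my_conf_file comes from the question, while the raw GitHub URL and the my-image name are hypothetical placeholders:
#!/bin/sh
# Prefer the user's config file if it exists, otherwise download the GitHub default.
CONF="${my_conf_file:-}"
if [ -z "$CONF" ] || [ ! -f "$CONF" ]; then
    CONF=/tmp/conf.json
    curl -fsSL -o "$CONF" https://raw.githubusercontent.com/example/repo/main/conf.json   # hypothetical URL
fi
# Mount whichever file we ended up with over /conf.json in the container.
docker run -v "$CONF":/conf.json my-image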

Need to remove symbolic link I created for /var/lib/docker/ but can't find it. Unable to use docker and feel helpless

I'm feeling really terrible atm so any help would be really appreciated. I kept running out of space when downloading Docker images on /var, so I decided I needed to change the location where Docker stores its images. I tried several methods but had no success. First, I tried creating daemon.json in /etc/docker and mapping data-root to a place with more storage (/data2/docker). I stopped Docker, moved everything over, made the file, but no dice. The Docker daemon wouldn't start.
Then I saw this method https://stackoverflow.com/a/49743270/13034460, which involves creating a symbolic link between /var/lib/docker and the new directory (/data2/docker). I followed its instructions:
Much easier way to do so:
Stop docker service: sudo systemctl stop docker
Move existing docker directory to new location sudo mv /var/lib/docker/ /path/to/new/docker/
Create symbolic link
sudo ln -s /path/to/new/docker/ /var/lib/docker
Start docker service
sudo systemctl start docker
Well, this didn't work for me. I can't find the error message b/c it's too far up in my terminal, but it was along the lines of "you don't have enough storage/we don't know where to store this image". /data2/docker should have tons of storage so that can't be the issue.
But the big problem now is that this symbolic link exists and I can't figure out how to get rid of it. I tried removing everything related to docker on the computer, uninstalling, then reinstalling docker (which always used to work for me if there were any issues). But when I reinstall, it won't even run docker hello-world b/c of the link (I think). I get a message:
docker: open /data2/docker/tmp/GetImageBlob289478576: no such file or directory
So... it's looking in /data2/docker because of the symbolic link (I assume), but that directory doesn't exist anymore. But neither does /var/lib/docker! All I want is to delete this link and get everything back to fresh defaults. I can worry about the storage issue another time. If I can't use Docker at all, I'm so screwed. I've tried looking in every directory to find the link using ls -l, but I can't find it. I used the exact commands the answer above describes when I created the link (just with my paths instead).
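A hedged way to check for and undo such a link, assuming it was created at /var/lib/docker exactly as in the quoted steps (the /data2/docker path comes from the error message above):
sudo systemctl stop docker
ls -ld /var/lib/docker       # a symlink shows up as "/var/lib/docker -> /data2/docker"
sudo rm /var/lib/docker      # removes only the link; rm without -r refuses to delete a real directory
cat /etc/docker/daemon.json  # also check for a leftover "data-root" entry pointing at /data2/docker
sudo systemctl start docker  # the daemon recreates /var/lib/docker with its defaults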
I would be so grateful to anyone who could help--I'm so lost on this. Thank you!

"docker-compose up" fails with error

I want to work on a project, and I need to use Docker to run the app, but the docker-compose up command fails with this error:
System error: exec: "./wait_to_start": stat ./wait_to_start:
no such file or directory
The wait_to_start command is an executable python script in the subfolder backend/.
I need to determine why it cannot be executed. Either it is being looked up in the wrong path, or there is an access-rights problem, or maybe the wrong Python version is used.
Can I get more detailed debug output, or log in with SSH and check the files on the virtual machine? I'm too inexperienced with Docker...
You can either set the "workdir" metadata to make sure you are in the right place when you start the container, or simply call /backend/wait_to_start instead of ./wait_to_start so you remove the need to be in the proper directory.
To debug with docker-compose I would do this:
docker-compose run --entrypoint bash <servicename>
That should give you a prompt and let you inspect the file and the working directory to see what's wrong.
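For example, once you have that prompt, a few quick checks (the /backend path is the one the answer above suggests; the exact layout inside the image is an assumption):
# inside the container started by the command above:
pwd                              # confirm the working directory the command runs from
ls -l ./wait_to_start            # does the file exist here, and is its executable bit set?
ls -l /backend/wait_to_start     # the absolute path the answer suggests calling instead
head -1 /backend/wait_to_start   # check that the shebang line points at an interpreter that exists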
