Docker Desktop on Hyper-V - bind mounts do not propagate inotify events on file copy - docker

I have Docker Desktop installed on my dev machine, with WSL 2 disabled, and I have shared my entire C:/ drive.
Then I have a container running a .NET 6 (Core) application that uses a FileSystemWatcher to observe one directory and read any file that is pasted into it.
I read in several articles on the internet that WSL 2 does not propagate file change notifications from the Windows file system to the underlying Linux distribution that Docker runs on, hence there is no way for me to bind the directory that I have to "watch" into the app's container. So I switched to Docker's older Hyper-V backend.
I run the container with the following command:
docker run `
--name mlc-importer `
-v C:/temp/DZBank:/opt/docker/mlc_importer/dfs/DZBank `
-v C:\temp\appsettings.json:/app/appsettings.json `
-v C:\temp\log4net.config:/app/log4net.config `
mlc-importer
The container starts and begins "watching" for new files. The strange thing is that when I cut a file and paste it into the directory, the app in the container registers the new file and reads it, but when I copy the file and paste it into the directory, the app in the container does not register or read it.
Can someone help me? I can't figure out where the problem comes from.
Thanks in advance,
Julian

I managed to solve my problem, and I'll post it here in case somebody encounters the same thing.
The problem was in the file itself. I found this out when I started a new container with only Debian, installed inotify-tools, and bind-mounted the same path. When I copied the file and pasted it into the mounted directory, the output was three MODIFY events.
When I cut the file and pasted it into the directory instead, the events were one CREATE and two MODIFY.
So with copy - three times MODIFY; with cut - one CREATE and two MODIFY.
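To reproduce that check, here is a minimal sketch of the setup described above (the host path is taken from the question; the image tag and mount point inside the container are assumptions):

docker run -it --rm -v C:/temp/DZBank:/watched debian
# then, inside the container:
apt-get update && apt-get install -y inotify-tools
inotifywait -m /watched

inotifywait -m keeps watching and prints each event (CREATE, MODIFY, ...) as it arrives, which is how the copy/cut difference above shows up.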
Then I inspected the copied file's properties and saw that Windows had marked it as blocked ("This file came from another computer...").
When I ticked the Unblock checkbox and hit OK, everything was fine. And since the app in the container (from the post) only hooks the "file created" callback, it doesn't trigger when the file is merely modified.
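For what it's worth, the same fix can presumably be scripted from the host in PowerShell; the file name here is a placeholder, and the Zone.Identifier alternate data stream is the marker Windows uses for blocked files:

# inspect the stream that marks the file as blocked (errors if the file isn't blocked)
Get-Item C:\temp\DZBank\example.xml -Stream Zone.Identifier
# remove the mark so the file behaves like a locally created one
Unblock-File -Path C:\temp\DZBank\example.xml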
Hope this helps someone with a similar problem

Related

Can I run scripts from a docker build context without a copy?

I want to build on top of a windows docker container by installing a couple programs. The files total .5 GB and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build-context, and then have the build-context swept away at the end so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers because the COPY command creates a new layer when it's done?
I don't know if the container can see the build-context directly. I was hoping for some magical folder filled with the build-context files so I could run a script using it, but haven't found anything.
It seems like the alternative is to create a private file-server and perform a RUN that can download them from that private server and unpack them, run the install, and remove them (all as 1 docker step). I understand this would make it more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change as the container will build patches for a legacy application. Just seems like a lot to host files on a private, public-facing server for something that will get called once every couple years if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with file systems, but you could do it this way:
In your Dockerfile, use a COPY command, run the install, then RUN del ... to remove the installation files
Build your image docker build -t my-large-image:latest .
Run your image docker run --name my-large-container my-large-image:latest
Stop the container
Export your container filesystem docker export my-large-container > my-large-container.tar
Import the filesystem to a new image cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with Windows containers, sorry.
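Putting those steps together, a minimal end-to-end sketch (using the image and container names from the steps above):

docker build -t my-large-image:latest .
docker run --name my-large-container my-large-image:latest
docker stop my-large-container
docker export my-large-container -o my-large-container.tar
docker import my-large-container.tar my-small-image:latest

Note that docker import flattens the filesystem into a single layer and drops image metadata such as CMD, ENTRYPOINT, and exposed ports; if you need those, docker import accepts --change to re-apply Dockerfile instructions.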
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while an image is being built.
I didn't find a satisfactory answer to this. Docker seems designed for only the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found that I eventually went with was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the docker daemon in some way, e.g. if it acted like a temporary server that you could get data from via HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.

Docker Failed to Initialize on Windows

I have a problem pulling docker-dev into a Docker image to set up my development environment: when I tried to pull docker-dev, I got a "docker manifest not found" error.
Can anyone help me out with this error, please?
But before that, I want to sort out the "Docker failed to initialize" error that I'm having right now.
I tried many things, like reinstalling Docker Desktop and updating WSL, but nothing worked.
So if someone can help me out with this, please do.
Got the same issue and fixed it by deleting %appdata%\Docker, as mentioned by GitHub user "tocklime"
(Original Source : https://github.com/docker/for-win/issues/3088)
Short solution: delete %appdata%\Docker\settings.json and let Docker create a new one.
Take a backup of the file for the next time it gets broken.
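In PowerShell, that backup-and-reset might look like this (the path is the Docker Desktop default):

# back up the current settings for the next time the file gets corrupted
Copy-Item "$env:APPDATA\Docker\settings.json" "$env:APPDATA\Docker\settings.json.bak"
# remove the broken file; Docker recreates it with defaults
Remove-Item "$env:APPDATA\Docker\settings.json"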
I face this issue almost every month and I hope it gets fixed for good.
Following tmBlackCape's answer, I checked the %appdata%\Docker directory and found settings.json damaged (my editor said it was a binary file, which of course it shouldn't be).
I deleted the file, and the Docker service (still running) created a new one with default values. If the service isn't running, just launch it again.
You may need to adjust settings (via the GUI, as recommended) to fit your needs.
I made a backup copy of my custom settings.json so next time I can replace the broken one without losing custom configuration.
Go to the directory C:\Users\<username>\AppData\Roaming\Docker and delete the file settings.json. Docker takes care of rewriting it at startup.
This manipulation solved the problem for me!
The error message I got was not exactly the same as the OP's. For me, it said: Docker failed to initialize. Docker Desktop is shutting down.
I deleted the following directories:
C:\Users\[USER]\AppData\Local\Docker
C:\Users\[USER]\AppData\Roaming\Docker
C:\Users\[USER]\AppData\Roaming\Docker Desktop
Once I had deleted them, I didn't have to do anything else; Docker Desktop started booting up as normal.
TL;DR
The powershell executable was missing from my local machine's PATH. I had to add C:\Windows\System32\WindowsPowerShell\v1.0 and Docker started again.
The longer story
I tried everything that was mentioned in this thread and nothing worked for me. When I looked at the Task Manager on my local machine to see whether any Docker-related process had started, I noticed that Docker Desktop.exe itself was started, but com.docker.backend.exe was not. Docker tries to start that exe in an infinite loop, but each time it starts it crashes again after half a second.
I then took a look into com.docker.backend.exe.log, which is located in %localappdata%\Docker\log\host and noticed the following line:
[2022-07-07T10:46:57.936079700Z][com.docker.backend.exe][F] exec: "powershell": executable file not found in %PATH%
I then went ahead and just added the path to powershell (which is C:\Windows\System32\WindowsPowerShell\v1.0) to PATH. As soon as I added that, everything started working again.
I have no idea how the path to the powershell executable got removed from PATH. I certainly did not do that myself.
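If you want to check for this condition yourself, a quick sketch (the second command is PowerShell and inspects the machine-level PATH):

where.exe powershell
[Environment]::GetEnvironmentVariable('Path', 'Machine') -split ';' | Select-String -SimpleMatch 'WindowsPowerShell'

If the first command reports that it cannot find powershell, add C:\Windows\System32\WindowsPowerShell\v1.0 back to PATH, e.g. via the Environment Variables dialog.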
This happened to me after Docker Desktop upgrade to version 3.6.0 (67351), too. (Which was surprising, because it worked before the upgrade.)
Thanks to the help in the current top answer, I went to the settings directory %appdata%\Docker, looked at the logs, and deleted/renamed the file settings.json -> Docker Desktop started immediately; there had been a process retrying in the background.
In the time before that, the backend.exe.log had been all "unmarshal" something something:
settings.json: json: cannot unmarshal bool into Go struct field Content.proxyHttpMode of type string"
common/cmd/com.docker.backend/internal/settingsloader.GetSettings(0x0, 0x0, 0x0)
So, the above message 'tipped me off' as to where the actual error on start may be. Hmm...
For me the solutions here didn't help, but here's what did.
Make sure the folder .docker/ in your home directory isn't marked as hidden in Windows. If it's hidden, Docker won't see it.
Make sure Docker has access to the .docker folder. For example, if the owner of .docker/ is SYSTEM and not your user, Docker won't be able to read it and will crash.
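A quick PowerShell sketch for checking both conditions, assuming .docker sits in your user profile:

# clear the Hidden attribute on the .docker folder
attrib -h "$env:USERPROFILE\.docker"
# print the owner, which should be your user rather than SYSTEM
(Get-Acl "$env:USERPROFILE\.docker").Owner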
For me, deleting the folder %appdata%\Docker did not work.
Instead I had to run the following PowerShell command as admin:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
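To verify the feature's state before or after running that command (also from an elevated PowerShell), Get-WindowsOptionalFeature can report it:

Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V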
References.
https://stackoverflow.com/a/63845592/1977871
https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v#enable-hyper-v-using-powershell
I resolved it by deleting the file
C:\Users\{username}\AppData\Roaming\Docker\settings.json
I had the same issue. I'm on Docker Desktop 4.8.0. The following solution worked for me:
Uninstall Docker Desktop
Delete .docker folder from C:\Users\{Username}
Delete Docker folder from C:\Users\{Username}\AppData\Local
Delete Docker folder from C:\Users\{Username}\AppData\Roaming
Delete Docker folder from C:\Program Files
Run CCleaner on Registry and Custom Clean
Restart Computer
Install Docker
Restart Computer
Start Docker
That's all.
Go to C:\Users\asd\AppData\Local and delete the Downloaded Installation directory.
Go to C:\Users\asd\AppData\Roaming and delete the Docker and Docker Desktop directories.
Then start docker.
I didn't find the AppData folder in Users\[myUser] or anywhere else. A re-install solved it for me.
The latest update, 4.10.1, has issues. I downgraded to 4.6.1 and it worked. However, I think PowerShell is closely linked to this issue.
It happened due to local data corruption; you can check the VM log inside %appdata%\Docker.
I was able to reset and restart by renaming settings.json, cleaning out the old log and temp folders, and then restarting Docker Desktop. If this does not help:
Try killing all the Docker processes running in Task Manager,
then run PowerShell in administrator mode and, if you are on an older Windows version that still uses LxssManager, run Restart-Service LxssManager.
On my side, uninstalling Hyper-V did the trick:
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Hypervisor
I also (as suggested in the top-voted answer):
Uninstalled Docker Desktop,
Deleted any Docker-related content in %APPDATA% as suggested,
Deleted any Docker-related keys in the registry (may not be necessary),
Then reinstalled it.
And that solved the problem (but without the Hyper-V removal it didn't work).
There are a few great solutions here; however, none of them worked for me. Most of the fixes suggested here were already in place, and it still didn't work, so I chose a different approach.
I looked at the Docker release notes to see what was actually affected in the latest, non-working version (4.10). These things are normal, especially when the engine itself has been updated.
Going through the Docker releases, the latest release that didn't touch the Docker engine is 4.8.0. I downloaded it and it works great.
Removing the settings.json file or the %AppData%\Docker folder did not work for me. After uninstalling and reinstalling Docker, the issue was solved. (Note: I couldn't try other possible solutions that require admin privileges.)
The below steps worked for me.
Step 01: Navigate to the below-mentioned paths and delete these directories:
C:\Users\[USERNAME]\AppData\Local\Docker
C:\Users\[USERNAME]\AppData\Roaming\Docker
C:\Users\[USERNAME]\AppData\Roaming\Docker Desktop
Then restart Docker.
Step 02: If you didn't find the AppData folder under Users\[USERNAME], type %appdata%\Docker into the "Run" dialog to open the app data location directly, then delete the same three directories and restart Docker again.
Note: If these steps do not help, try killing all the Docker processes running in Task Manager, then delete the directories again and start Docker.
If you still can't find the issue, you may try this trick:
First, check that your operating system is up to date (no pending updates). If you pass that step, open a command prompt as administrator and run:
wsl --install
wsl --list --online
wsl --install -d Debian
Then open Docker. If the error still appears, install the WSL 2 Linux kernel update package: https://learn.microsoft.com/en-us/windows/wsl/install-manual#step-4---download-the-linux-kernel-update-package
Then restart Docker or your machine. If the error still occurs, run wsl --update. I hope this will work. Regards, Hammad Khadim

Alternative to using --squash when building docker images on Windows using local files

We have some local installers and zip files that we use to build our docker images. It is easy to get this to work in a Dockerfile:
FROM mcr.microsoft.com/windows/nanoserver
COPY myinstaller.exe .
RUN myinstaller.exe && \
del myinstaller.exe
The problem here is that it produces a layer for the COPY line, which increases the size of the image. A common work-around for this is to have one RUN line, that downloads the file from the Internet, runs commands, and then deletes the installation file. The problem, as written above, is that the installers are on the local filesystem.
I found that there is a --squash option for docker build:
docker build --squash -t mytestimage .
This does exactly what I want: It gives me an image without this extra installer file that is not necessary. To run this command, you need to enable experimental features though. There is also an open issue to simply remove this feature:
https://github.com/moby/moby/issues/34565
Is there some alternative way of using local installers in a Dockerfile when running on Windows, that doesn't involve setting up a server to provide the files?
We ended up setting up nginx to provide files when building. On our build server, the machine building our docker images and the server that has the installer files have a very good connection between them, so downloading huge files is not a real problem.
When it comes to --squash, it is bugged for Docker on Windows. Here is the relevant issue for it:
https://github.com/moby/moby/issues/31468
There is an issue to move --squash out of experimental, but it doesn't seem to have a lot of support:
https://github.com/moby/moby/issues/38657
The alternative that some people propose instead of --squash is a multi-stage build, discussed here:
https://github.com/moby/moby/issues/34565
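For reference, a minimal sketch of the multi-stage idea for the Windows case; the base image tags, the installer name, and the assumption that it unpacks into C:\app are all hypothetical:

# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019 AS install
COPY myinstaller.exe .
# hypothetical silent install that lays files down under C:\app
RUN myinstaller.exe /quiet && del myinstaller.exe

FROM mcr.microsoft.com/windows/nanoserver:ltsc2019
# only this final stage's layers ship with the image, so the
# installer copied into the first stage is left behind
COPY --from=install C:/app C:/app

Since only the final stage's layers end up in the image, the installer never contributes to its size, which is the same effect --squash was after.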
There is an alternative to --squash if you have local installer files, don't want to set up a web server, would like your docker image to be small, and are running Windows: use mapped drives.
In Windows, you can share folders with other users on your network. A Docker container is like another computer running on your physical machine, and it can access these network shares.
First set up a new user, for example username share and password password1. Create a folder somewhere on your computer. Then right-click it, click Properties, go to the Sharing tab, and click "Share". Find the user that you have just created, using the little dropdown menu and Find people ..., and share the folder with this user.
Create a folder somewhere for your test project. Create a batch file setupshare.bat that looks like this:
@echo off
rem Find the host's IP as seen from the container: the default gateway.
for /f "tokens=2 delims=:" %%i in ('ipconfig ^| findstr "Default Gateway"') do (
set hostip=%%i
goto :end
)
:end
rem Strip the leading space left over from the ipconfig output.
set hostip=%hostip: =%
rem Map O: to the shared folder "vms" on the host.
net use O: \\%hostip%\vms /USER:share password1
The first part of this file only finds the IP address that the docker container can use to reach its host computer. It is not the prettiest thing I've ever put together, so let me know if there's a better way!
It uses a for-loop, as that is the way to save the output of a command to a variable in batch files. The command is ipconfig, and we pipe it to findstr and searches for Default Gateway. We need to use ^| instead of just | because it is in a for-loop. The first part of the for-loop divides each line from the command on the delimiter, which is : in this case, and we only take the second token. The for-loop only handles the first line, if there are multiple entries with a Default Gateway. This script doesn't work if there are multiple entries and the first one is not the correct one.
The line set hostip=%hostip: =% is to remove a space at the start of the string.
We then have the IP address that we want to use stored in hostip. We use this in the net use command, which will map O:\ to shared folder vms on the machine with IP hostip. We use the username share and the password password1. Note that this is a very bad way of handling passwords, as they kind of should be secret!
With a batch file like this, we can set up a Dockerfile in this way:
# escape=`
FROM mcr.microsoft.com/dotnet/core/sdk:3.0
COPY setupshare.bat .
RUN setupshare.bat && `
copy O:\file.txt file.txt
The RUN command will first call setupshare.bat that sets up the network share properly. We can then use any file that we shared, for example a huge installer, and install the things that we want. In this case I have only shared a test file file.txt to see that it works, so just change that line.
I would still advise everyone to just set up a little web server, for example nginx, and use the standard way of writing Dockerfiles: downloading files and running them in the same RUN command. That's what people expect when they see a Dockerfile, and it should be a more robust solution.
We can also hope that the Docker people either make a COPY command that can copy, run, and delete installers in the same layer, or implement --squash properly.

Unable to start any container when Volumes are enabled Docker Toolbox

I am running Docker Toolbox v1.13.1a on Windows 7 Pro Service Pack 1 x64,
with VirtualBox version 5.1.14 r112924.
When I try to run any docker image, e.g. the official postgres image from Docker Hub, with volumes disabled, it works fine!
But when I enable the volumes, it fails.
I tried everything in the official documentation.
The VM has the shared folder set up as required and has full access to it (shared folder screenshot).
In my postgres example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kinda stuck!
A ton of thanks in advance
I've been busy with this problem all day, and my conclusion is that it's currently simply not possible to run postgresql inside a docker container while keeping your data persistent in a separate volume.
I even tried running the container without linking to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that got linked to the container itself.
Alas, I got the following error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something goes wrong with ownership, i.e. which user owns the data directory, and to be able to fix it you'd need a Unix command line on Windows that can drive docker (something currently not possible with Bash on Ubuntu on Windows, which runs Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under Arbitrary --user Notes), but these are *nix commands, and PowerShell (started by Kitematic) can't run them. Bash on Ubuntu on Windows could run them, but that shell has no connection to the docker daemon/service on Windows...
TL;DR: Lost a day of work: it is currently impossible on Windows.
I have been trying to fix this issue too.
At first I thought it was a symlink problem (because the first error fails on "could not link ... operation not permitted").
To be sure symlinks are permitted, you have to:
share a folder in VirtualBox
run VirtualBox as administrator (if your account is in the administrators group): right-click virtualbox.exe and select Run as Administrator
if your account is not an administrator, grant the symlink privilege with secpol.msc > Local Policies > User Rights Assignment, and add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox:
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
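For example, with the Docker Toolbox defaults (a VM named default, and C:\Users shared as c/Users), that would presumably be:

VBoxManage setextradata default VBoxInternal2/SharedFoldersEnableSymlinksCreate/c/Users 1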
Alternatively, you can use the C:\Users\username folder, which is shared and symlink-enabled by default in the Docker Toolbox installation.
Now I can create symlinks in the shared folder from the docker container, but I still get the same error "could not link ... operation not permitted".
So the reason must be somewhere else... in the file permissions, as you said, but I do not see why.

docker-compose caches run results

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, if that affects it in some way. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using mounts your current directory as a volume, so the file conf.opvn is made available inside the docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder where it is mounted in the docker container. Try changing the file's permissions to 777 before beginning the process and check again.
You can find a discussion about this in the official Docker forum.
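For reference, a minimal sketch of that kind of bind mount in a docker-compose.yml (the service and image names here are hypothetical, not taken from the actual repo):

services:
  vpn:
    image: example/openvpn   # hypothetical image name
    volumes:
      - .:/vpn               # mounts the current directory into the container

With a mount like this, the container reads conf.opvn through the mounted directory rather than from a copy baked into the image, which is why mount-level quirks (rights, renames) show up as stale reads.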
