Unable to share C drive for Docker Linux Container on Windows 10 - docker

A lot of people have been facing similar issues, based on the several links that I checked; however, none of the solutions I found works for me:
Add Local account
http://peterjohnlightfoot.com/docker-for-windows-on-hyper-v-fix-the-host-volume-sharing-issue/
Disable firewall
https://github.com/docker/for-win/issues/1381
Checked for weird characters in the password.
Tried to mount on D drive (external hard drive).
Could there be an issue with my Docker installation? I also tried uninstalling and re-installing, and I have tried both the Docker stable and edge versions; the same problem persisted.
On top of that, I realized Docker is not giving me enough info to figure things out. The log that I have under %AppData%\Local\Docker shows only the following:
Is there any other place where I can check the Docker logs?
Added my docker-compose.yml file: docker-compose.yml
Added my Dockerfile: DockerFile
Added my startup.sh file: startup.sh

Related

no configuration file provided: not found for docker compose up --scale chrome=5

This might look similar to this existing solution, but I have tried all the solutions mentioned there and none of them resolves my issue.
I have created a Docker Compose file, Docker-Compose-V3.yml.
On running docker-compose -f docker-compose-v3.yml up, I am able to successfully spin up the grid network.
When I try to scale my Chrome nodes using docker-compose up --scale chrome=5, I get a no configuration file provided: not found error message.
I have tried the following solutions from the existing answer linked above, but to no avail:
Made sure I am in the correct directory where the Docker Compose file is present
Checked the extension of the .yml file and cross-checked the folder option settings
I am unable to understand why Docker is able to identify the compose file when asked to spin up the grid, but fails to do so when I try to scale up the services.
I know that another possible way to do this is Docker Swarm, which I came across while scrolling through the Docker documentation, but I would like to understand why this isn't working.
Needless to say, I have just started exploring the Docker world and would appreciate being pointed towards any existing documentation/answers that would resolve my problem.
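For reference, a minimal sketch of how the -f flag would need to accompany both commands, since Compose only looks for a default docker-compose.yml/docker-compose.yaml when -f is omitted (the file name is taken from the question above):
docker-compose -f docker-compose-v3.yml up
docker-compose -f docker-compose-v3.yml up --scale chrome=5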

How to mount existing directory on Windows Host in Docker Portainer?

I have this docker-compose.yml:
volumes:
  - D:/Docker/config:/config
  - D:/Downloads:/downloads
I can run this with docker-compose up without any issue.
But in a Portainer stack, I get an error:
Deployment error
failed to deploy a stack: Named volume "D:/Docker/config:/config" is used in service "test" but no declaration was found in the volumes section. : exit status 1
Basically I want to map my host folder D:/Docker/config. How do I do this in Portainer?
I spent a long time figuring this out.
In Docker Compose on Windows, with Docker Desktop running in WSL 2 mode, a bind mount's mount point has to be formatted like this:
- /mnt/DRIVE-LETTER/directory/to/location:/container/path
An example would be
- /mnt/k/docker/tv/xteve/config:/home/xteve/config
You also have the option of using relative paths from where the Compose file is located, but with Portainer that isn't an option. I know, I tried everything I could think of. Then I was looking at tutorials and saw the same thing #warheat1990 posted here, and experimented with that.
Portainer tells you to paste your Docker Compose file, but the paths are different. The paths inside Portainer will not work and will be ignored, or placed somewhere you can't get to them from Windows, unless you remove the "/mnt" and start with the drive letter:
- /DRIVE-LETTER/directory/to/location:/container/path
An example would be
- /k/docker/tv/xteve/config:/home/xteve/config
I tested it: outside of Portainer it fails without the "/mnt", but within Portainer the "/mnt" can't be there. So far I'm fairly confident that there is no way to do it that works for both, which is super annoying, because Portainer makes it easy to paste your Compose file or actually import it, but then you must edit it; it's just that nobody tells you that...
Hope that helps
Use /d/Downloads to make it work, thanks to #xerx593.
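Applied to the compose file from the question, a Portainer-friendly volumes section might look like this (a sketch based on the advice above, assuming the same D: drive layout):
volumes:
  - /d/Docker/config:/config
  - /d/Downloads:/downloads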
Edit: I came here about the same question and forgot that the question was for Portainer. The answer below worked in Docker Compose on Windows.
Not sure how helpful this is, but I ran into that issue myself while trying to deploy Filerun in Docker Desktop on Windows.
The goal was to have my containers' data in one place, in a Windows folder, while the Filerun files were on a different drive.
I fumbled A LOT, but what worked for me in the end with Docker Compose:
volumes:
  - D:\myfolder:/mnt/myfolder
  - ./filerun/html:/var/www/html
  - ./filerun/user-files:/user-files # (default, could be pointed directly to mnt)
The ./filerun defines a path relative to where the docker compose yml is stored and executed, so that I have persistence in my Windows folder there even when removing a compose stack (this is not intended for intensive production stuff).
And my files were accessible in Filerun through /mnt/myfolder.
Not sure if docker desktop has been updated on that since the previous answer was given.

Accessing Files on a Windows Docker Container Easily

Summary
So I'm trying to figure out a way to use Docker to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install into a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, and I figured it out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser so testers can quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server in the Docker container, so I could just SSH in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually via the command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution, as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use SSH when you can just use the built-in Docker commands". I want to use SSH because it's simpler from an end user's perspective, and I'd rather tell a tester to SSH to a particular IP than make them interact with Docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it starts successfully; however, the service stops immediately if I haven't generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of documentation seems to say this should be possible, but I can't get it to work. I keep getting errors saying "the directory is not empty" when I try to start the container.
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
Running this on a ProxMox VM.
At this point, I'm running out of ideas, and something that I feel should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built-in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So I instead decided to try starting the container with the volume linked to an empty folder, and then starting the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then try to restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. Still, it works just like I need it to, and I can then use Windows sharing to access that volume folder from anywhere on the network.
Here's how I got it working, it's actually very simple.
So in my dockerfile, I added a batch script that unzips the installation DVD that is copied to the container, and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm using a Mac and Windows containers are not a "supported platform" there (way to go, Windows). See if that works; if not, try a volume line like this instead: - ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from the host machine to the container
ADD ["<source>", "C:\\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\\tmp\\program.exe", "any-parameter"]
Docker Compose
Should ideally be in the parent folder.
version: "3"
services:
  windows:
    build: ./folder-of-Dockerfile
    volumes:
      - type: bind
        source: ./my_volume
        target: C:/tmp/
    ports:
      - 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
    |
    |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode, but only once you know it's working properly.
Useful link: Manage Windows Dockerfile

Docker-compose volumes not mounted correctly in VirtualBox under Windows

I am trying to run Hyperledger's BYFN Tutorial on a Win10 Home using Docker Toolbox, with VirtualBox 5.2.4. I am using the default image for the VirtualBox VM.
I have set up a shared folder (not in C:/Users, but on my other drive) and it seems to be functioning correctly: changes I make from either Windows or the docker-machine are reflected in both places, as intended. I successfully generate the network artifacts using "./byfn -m generate", but I get an error when trying to bring the network up with "./byfn up".
What happens is that, as far as I can see from the logs, all the containers get brought up correctly, but for some reason the volumes of the cli container are not attached correctly (I think). When byfn.sh finishes I get the following error:
When I SSH into the cli container, I can see the channel-artifacts, crypto and scripts folders, but their contents don't seem to correlate with the volumes: part of the docker-compose file. First, the scripts folder is empty (whereas the docker-compose file specifies that it should mount a bunch of files), so I get the above error. Second, the channel-artifacts folder contains only one directory named genesis.block, which should actually be a file. And in the crypto folder there are just a bunch of directories.
As you might have guessed, I'm pretty new at docker, so this might be intended behavior, but I'm still getting an error.
Please let me know if I can provide additional information. Thanks in advance.
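A quick way to sanity-check this (just a sketch: the container name cli comes from the question, and the shared-folder path is a placeholder) is to compare what actually got mounted into the container against the compose file, and to confirm the shared folder is visible inside the Docker Toolbox VM at all:
docker inspect -f "{{ json .Mounts }}" cli
docker-machine ssh default "ls -la /path/to/your/shared/folder"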

Error while sharing local drive(volume) with docker for windows

I am getting the below error when I try to share a local drive (volume) with Docker for Windows:
docker run --rm -v c:/Users:/data alpine ls /data
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: C: drive is not shared. Please share it in Docker for Windows Settings.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I tried sharing the drive from the Docker settings and provided my username and password, but no luck; I am still getting the same error.
I had a similar issue with the error message "docker: Error response from daemon: Drive sharing failed for an unknown reason."
I opened Docker settings > Shared Drives > checked the C drive > clicked Apply, and restarted Docker to resolve the issue.
I was facing a similar issue when starting containers with docker-compose. I got an error:
A firewall is blocking file sharing between Windows and the containers.
Then I checked the Docker settings, and under the Shared Drives section I tried to check the checkbox for the C: drive, but after hitting Apply the checkbox unchecked itself.
Then I copied the line docker run --rm -v c:/Users:/data alpine ls /data into PowerShell, ran it, and got the error:
Drive sharing failed for an unknown reason.
But after this error, I decided to just try restarting Docker. After the restart, I tried to check the checkbox in the Shared Drives section once again and now it stayed checked and everything is working as it should.
I was using Docker Stable version.
At the moment, on the Creators Update (1703), Samba shares are not working. There are a lot of tickets in the official repo:
For example: #662, #669, #756
There is a workaround described here:
The same started happening for me after installing the Win 10 Creators Update (Build 15063)
Firewall rules are not the issue, those are correct
for some reason, after a reboot, I cannot access any local SMB shares on the DockerNAT interface (10.0.75.1)
I am able to fix this temporarily by disabling and re-enabling the "File and Printer Sharing for Microsoft Networks" component on the virtual "DockerNAT" network interface.
Afterwards, I am able to browse \\10.0.75.1.
Disable and re-enable the shared drive in Docker settings and it works, until the next reboot.
I was dealing with a similar issue during setup.
I couldn't share my directories with Docker because I was using my Azure AD login credentials. You need to create a local admin user. If the local admin doesn't appear right away, you may need to reinstall Docker under the local admin user.
I hope this helps someone struggling with a similar issue.
Like Shweta Gupta was saying, if you are using an AzureAD user, you'll need to create a local account on your machine, and use that to give Docker permissions to the drive.
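A minimal sketch of creating such a local admin account from an elevated command prompt (the username dockershare is just a placeholder; the * makes net user prompt for a password):
net user dockershare * /add
net localgroup Administrators dockershare /add
You would then enter those local credentials in Docker's Shared Drives settings instead of the Azure AD ones.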
