Error while sharing local drive (volume) with Docker for Windows

I am getting the error below when I try to share a local drive (volume) with Docker for Windows:
docker run --rm -v c:/Users:/data alpine ls /data
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: C: drive is not shared. Please share it in Docker for Windows Settings.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I tried sharing the folder from the Docker settings and provided my username and password, but no luck; I still get the same error.

I had a similar issue with the error message "docker: Error response from daemon: Drive sharing failed for an unknown reason."
I opened Docker Settings > Shared Drives, checked the C drive, clicked Apply,
and restarted Docker to resolve the issue.
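If you prefer the command line, one way to restart Docker Desktop's backend is to restart its Windows service from an elevated PowerShell prompt; this is a sketch assuming the default service name com.docker.service used by recent Docker Desktop releases:
# Run from an elevated PowerShell prompt; restarts the Docker Desktop backend service
Restart-Service -Name com.docker.service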

I was facing a similar issue when starting containers with docker-compose. I got an error:
A firewall is blocking file sharing between Windows and the containers.
Then I checked the Docker settings, and under the Shared Drives section I tried to check the checkbox for the C: drive, but after hitting Apply the checkbox unchecked itself.
Then I copied the line docker run --rm -v c:/Users:/data alpine ls /data into PowerShell, ran it, and got the error:
Drive sharing failed for an unknown reason.
But after this error, I decided to just try restarting Docker. After the restart, I tried to check the checkbox in the Shared Drives section once again and now it stayed checked and everything is working as it should.
I was using Docker Stable version.

At the moment, on the Creators Update (1703), Samba shares are not working. There are a lot of tickets in the official repo, for example #662, #669, #756.
There is a workaround described here:
The same started happening for me after installing the Win 10 Creators Update (Build 15063)
Firewall rules are not the issue, those are correct
for some reason, after a reboot, I cannot access any local SMB shares on the DockerNAT interface (10.0.75.1)
I am able to fix this temporarily by disabling and re-enabling the "File and Printer Sharing for Microsoft Networks" component on the virtual "DockerNAT" network interface.
afterwards, I am able to browse \\10.0.75.1
disable & re-enable the shared drive in Docker settings and it works - until next reboot
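If you want to script that disable/re-enable step, a PowerShell sketch like the following should work; the adapter name "vEthernet (DockerNAT)" is an assumption (check Get-NetAdapter for the exact name on your machine), and ms_server is the component ID for "File and Printer Sharing for Microsoft Networks":
# Confirm the exact DockerNAT adapter name first
Get-NetAdapter
# Toggle "File and Printer Sharing for Microsoft Networks" on the DockerNAT adapter
Disable-NetAdapterBinding -Name "vEthernet (DockerNAT)" -ComponentID ms_server
Enable-NetAdapterBinding -Name "vEthernet (DockerNAT)" -ComponentID ms_server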

I was dealing with a similar issue during setup.
I couldn't share my directories with Docker because I was using my Azure AD login credentials. You need to create a local admin user. If the local admin doesn't appear right away, you may need to reinstall Docker under the local admin user.
I hope this helps someone struggling with a similar issue.

Like Shweta Gupta was saying, if you are using an AzureAD user, you'll need to create a local account on your machine, and use that to give Docker permissions to the drive.
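For reference, a local admin account can be created from an elevated command prompt roughly like this; the username dockeruser is a placeholder, and the * makes net user prompt for the password instead of putting it on the command line:
net user dockeruser * /add
net localgroup Administrators dockeruser /add
You can then enter MACHINENAME\dockeruser (or .\dockeruser) and that password when Docker asks for credentials in the Shared Drives dialog.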

Related

Firewall detected while mounting volume in docker

I am trying to mount a volume using the shared drive option in Docker Desktop, but each time I click on the drive to share it, I get the error below:
My Docker version is: 2.1.0.5
My system: Windows 10
I am on my office system and connected to the internet using a VPN. I disconnected the VPN and tried to connect to the internet directly, but I still get this error. I don't have full access to modify settings on my laptop. I really need the mount option to share some files between the local machine and a container, and I am not able to do it. Could you please help me resolve this issue, or suggest any workaround I could try to mount my local files into a container without sharing the drive?
You want to upgrade to a 2.2.x.x release of Docker Desktop or newer. In that release they reworked file sharing to remove the Samba-based mounts.
Users don’t have to expose the Samba port, and therefore do not experience issues related to IT firewall or drive-sharing policy.
There were a few issues in the first few releases, so be sure to use the latest patch.
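After upgrading, the same kind of bind mount from the questions above should work without the Shared Drives / firewall dance; Docker Desktop simply asks once whether to allow file sharing for the path, e.g.:
docker run --rm -v c:/Users:/data alpine ls /data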

Unable to share C drive for Docker Linux Container on Windows 10

A lot of people have been facing similar issues, based on the several links that I checked; however, none of the solutions I have found work for me:
Add Local account
http://peterjohnlightfoot.com/docker-for-windows-on-hyper-v-fix-the-host-volume-sharing-issue/
Disable firewall
https://github.com/docker/for-win/issues/1381
Removed weird characters from the password.
Tried to mount on the D drive (an external hard drive) instead.
Could there be an issue with my Docker installation? I also tried to uninstall and re-install it. I have tried both the Docker stable and edge versions; the same problem persisted.
On top of that, I realized Docker is not giving me enough info to figure things out. The log I have under %AppData%\Local\Docker shows only the following:
Is there any other place I can check for my Docker log?
Added my docker-compose.yml file: docker-compose.yml
Added my Docker file: DockerFile
Added my startup.sh file: startup.sh

Unable to share C drive on Docker for Windows

I am running Docker Desktop for Windows on Windows 10 Enterprise. I get the following:
PS C:\Users> docker run --rm -v c:/Users:/data alpine ls /data
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: C: drive is not shared. Please share it in Docker for Windows Settings.
From Docker settings in the Shared Drives tab, I see that the C drive is there, but it is not checked. When I check it and press Apply, I am prompted for my password. Upon entering it successfully, the C drive is still not checked.
There are different problems that people face with sharing, but the most common one is a password containing non-English characters or spaces.
If you can change your password to remove the spaces/special non-English characters, then it should work.
Another workaround you can try is to create a local user, give it access to C:, and then use that local user's credentials when sharing C:\ in the Docker settings.
The user account supplied also needs to have admin permission. It seems obvious, but Docker doesn't return an error message when it fails (Version 18.06.1-ce-win73 (19507)). Remember to subsequently run PowerShell as that admin account in order to access the share.
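As a sketch, you can open a PowerShell session under that admin account with runas; YOURMACHINE and dockeruser are placeholders for your machine name and the local admin account:
runas /user:YOURMACHINE\dockeruser powershell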

Docker Compose stuck downloading or pulling fs layer

I have the latest Docker for Mac installed, and I'm running into a problem where it appears that docker-compose up is stuck in a Downloading state for one of the containers:
± |master ✗| → docker-compose up --build
Pulling container (repo.io/company/container:prod)...
prod: Pulling from company/container
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Pulling fs layer
somehash: Already exists
somehash: Already exists
somehash: Downloading [=================================================> ] 234.6 MB/239.3 MB
somehash: Download complete
somehash: Download complete
^^ this is literally what it looks like on my command line. Stopping and starting hasn't helped; it immediately produces this same output.
I've tried to docker rm the container, but I guess it doesn't exist yet; it returns No stopped containers. --force-recreate also gets stuck in the same place. Perhaps I'm not googling the right terminology, but I haven't found anything useful to try - any pointers?
I just needed to restart Docker.
Linux users can use sudo service docker restart.
Docker for Mac has a handy button for this in the Docker widget in the macOS menu bar.
If you happen to be using Docker Toolbox, try docker-machine restart.
I faced the same problem! Restarting the service didn't help, and downloading again didn't help. It would get stuck at random points, leaving me with no option but to kill the pull.
One thing which worked for me was to download 1 file at a time. For Ubuntu users, you can use the following steps:
Stop the service:
sudo service docker stop
Start docker with max concurrent download set as 1:
sudo dockerd --max-concurrent-downloads 1
Download the required image:
sudo docker pull <image_name>
Once the images are downloaded, stop the foreground daemon (Ctrl+C in that terminal) and start the service again as before:
sudo service docker start
I had a similar situation this morning where my network suddenly went down and I was forced to power cycle the modem while docker-compose was still in the middle of downloading stuff from Docker Hub.
Yes, bouncing the docker daemon process seems to resolve this.
For Linux users - do sudo service docker restart to fix it.
Go to the Docker Preferences from its menu bar icon. Within, there is a "bug" icon. Click on that and then "Clean / Purge data".
I'm running OSX and restarting Docker for Mac didn't help. Neither did a full restart or upgrading VirtualBox. What did work was turning my wifi interface on and off every time it got stuck. I had to do this repeatedly, but it eventually downloaded the entire image.
Directly download the necessary images using docker, e.g.
docker pull company/container
and then run
docker-compose up
again. Worked for me on MacOS.
I found a possible workaround.
I have my Docker engine installed in an Ubuntu 18.04 snap environment.
Searching some forums, I discovered that users relate this behavior to a limitation in download bandwidth.
In my case, part of the layer downloads got stuck partway through, and I finally cancelled the process with Ctrl+C.
I added two parameters (flags) to the configuration file that controls the Docker daemon's behavior: max-concurrent-downloads 1 and max-concurrent-uploads 1.
Remember, in my case I am working in a snap environment, so this file is located at /var/lib/docker/current/config/daemon.json.
Remember to stop all Docker processes before modifying the file, and create a backup of the file first.
Add the two settings to the file, as sketched below; this limits downloads (and uploads) to one at a time.
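A minimal sketch of what those two settings look like in daemon.json (the path above applies to the snap install; on a standard install the file is usually /etc/docker/daemon.json, and any keys already in the file should be kept):
{
  "max-concurrent-downloads": 1,
  "max-concurrent-uploads": 1
}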
This is the process that helped me resolve the problem; after that, the download completed successfully.
I had this issue in my VirtualBox when doing a docker pull on the image but it got stuck at a specific position and never moved from there. So, the issue was due to the network adapters in my VM. I was using NAT by default. When I switched it to "Bridged adapter", the issue went away.
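If you prefer to make that change from the command line, VBoxManage can switch the VM's first NIC to bridged mode; the VM name my-docker-vm and the host interface eth0 are placeholders, and the VM must be powered off first:
VBoxManage modifyvm "my-docker-vm" --nic1 bridged --bridgeadapter1 eth0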
I had a similar problem on docker for windows for a couple of days and when I tried to connect to the virtual machine (via Hyper-V Manager) the downloads started speeding along. I have no idea why but it worked for me...
Completely remove Docker.
Install Docker again.
It should work now.
I had tried restarting Docker and updating Docker, but that didn't help.

Google Container Registry access denied when pushing docker container

I am trying to push my Docker container to the Google Container Registry, using this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error :
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any more explanation (no -D, --debug, --verbose arguments were recognized), gcloud auth list and docker info tell me I'm connected to both services.
Anything I'm missing?
You need to make sure the VM instance has enough access rights. You can set these at the time of creating the instance, or if you have already created the instance, you can also edit it (but first, you'll need to stop the instance). There are two ways to manage this access:
Option 1
Under the Identity and API access, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under the Identity and API access, select Set access for each API and then choose Read Write for Storage.
Note that you can also change these settings even after you have already created the instance. To do this, you'll first need to stop the instance, and then edit the configuration as mentioned above.
Use gsutil to check the ACL to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers' etc.)
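If the account turns out to lack write access, one way to grant it is with gsutil acl ch; the email address is a placeholder, and bucket permissions can equally be managed from the Cloud Console:
$ gsutil acl ch -u jane.doe@example.com:WRITE gs://<my-bucket>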
EDIT: I have experienced a very similar problem to this myself recently and, as #lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately there's currently no way of changing the scopes once a VM has been created, so you have to delete the VM (making sure the disks are set to auto-delete!) and recreate the VM with the correct scopes ('compute-rw', 'storage-rw' seems sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
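For illustration, recreating the VM with the needed scopes could look like this; the instance name and zone are placeholders, and the scope aliases compute-rw and storage-rw are the ones mentioned above:
$ gcloud compute instances create my-build-vm --zone us-central1-a --scopes compute-rw,storage-rw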
I am seeing this but on an intermittent basis. e.g. I may get the error denied: Permission denied for "latest" from request "/v2/...."., but when trying again it will work.
Is anyone else experiencing this?
For me I forgot to prepend gcloud in the line (and I was wondering how docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the command below:
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
where [HOSTNAME] is your container registry location (it is either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io); check your tagged images to be sure by running $ sudo docker images.
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change to how they handle authentication, which affects users who are using a mix of gcloud docker and docker login.
Be sure you are using the latest version of gcloud via: gcloud components update.
So far this seems to affect gcloud docker, docker-compose and other tools that were reading/writing the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section from https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. Don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a VM in Compute Engine as my development machine, and it looks like I didn't give it enough rights for Storage.
I had the same problem with access denied, and I resolved it by creating a new image using a tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could PUSH It to Container registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error inside Jenkins running on Google Kubernetes Engine when pushing the Docker container. The reason was a node pool version upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done earlier. It worked again after downgrading.
You need to log in to gcloud from the machine you are on:
gcloud auth login
