I was able to pull the image from Docker Hub and run it normally.
But I'm having trouble executing the 'build' command.
There appears to be a path problem: no path I give relative to the current working directory can be resolved. Can you give me any advice here?
(I am in wsl2 - ubuntu 20.04)
You don't need to pass the file name to the build command; if the file is named Dockerfile, just pass the build context. Use this instead:
docker build -t python-test .
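If your file does have a different name, you can point the build at it with the -f flag (the path here is just an illustration):

docker build -f ./docker/Dockerfile.dev -t python-test .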
I am trying to run the Puppet Pupperware suite (all 3 services: puppet server, PuppetDB, and the database server).
I am using the official YAML file provided by puppetlabs for docker-compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with docker-compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
As a result, the startup fails (only the puppet server comes up, not the others).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, and I am not sure how to use a Dockerfile and docker-compose together.
thank you for your help
The docker-compose file is from the Puppet 6 era. The Docker images that the Pupperware setup currently pulls are tagged latest, which is now Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
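In the compose file, that means pinning the image tags, roughly like this (match the service names to the ones in the actual pupperware docker-compose.yml):

services:
  puppet:
    image: puppet/puppetserver:6.14.1
  postgres:
    image: postgres:9.6
  puppetdb:
    image: puppet/puppetdb:6.13.1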
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the Docker daemon runs to build the image, including the Linux commands executed inside the container. docker-compose then reads your docker-compose.yml and builds and starts the services defined there.
So to solve the permission problem, you should add a RUN instruction, which executes a Linux command in the shell at image-build time, and use it to fix the permissions on that folder; see the sketch below.
Also look at this answer
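A minimal sketch of the pattern described above, assuming it is the postgres service that needs fixing (untested, and it may not help if the directory is a host bind mount). In a file like ./postgres/Dockerfile:

FROM postgres:9.6
RUN chmod 0755 /docker-entrypoint-initdb.d

Then point the service at that build context in docker-compose.yml instead of the stock image:

services:
  postgres:
    build: ./postgres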
I have run into this problem when opening the project in a container.
Setting up container for folder or workspace: c:\Work\playground\moodle\lms_administrace
Run: docker-compose -f c:\Work\playground\moodle\lms_administrace\docker\docker-compose-dev.yml config --services
app
redis
db
phpmyadmin
Run: docker-compose --project-name docker -f c:\Work\playground\moodle\lms_administrace\docker\docker-compose-dev.yml up -d --build
Creating volume "docker_mysql_data_volume" with default driver
Pulling app (nodejs:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
The problem is that I cannot press y or N. I know why I'm having this problem: I have used that docker-compose file before, and the containers and volumes were created with the directory prefix (docker).
There's a way to change the compose project name through a .env file, but it does not work (I put the file in the root directory, in the directory where the compose file is, and in the .devcontainer folder). There is also the -p parameter, but the MS GitHub page does not provide any information about it; the sketch below shows the mechanism I mean.
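For reference (the project name is just a placeholder), in a .env file next to the compose file:

COMPOSE_PROJECT_NAME=myproject

or on the command line:

docker-compose -p myproject up -d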
I can probably fix it by renaming everything, but this may be a serious issue, since you can't continue in the process ...
Has anybody experienced a similar problem and fixed it?
Thanks,
Karel
You probably mistyped the service's Docker image name in docker-compose.yml.
You are trying to pull the nodejs image instead of node.
The same kind of error can also occur with postgresql vs. postgres.
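In docker-compose.yml the fix would look something like this (the tag is just an example):

services:
  app:
    image: node:14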
I had the same problem; mine was caused by using the wrong image name.
Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run:
docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash
This command exits with the following error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.
I'm using Docker Toolbox on Windows 10 Home.
For anyone running macOS and encountering this: I restarted Docker Desktop in order to resolve this issue.
Edit: It would appear this also fixes the issue on Windows 10.
My trouble was a FUSE-mounted volume (e.g. sshfs) that got mounted again into the container. It didn't help that the FUSE mount had the same ownership as the user inside the container.
I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.
Eventually it helped to mount the FUSE volume with the allow_other option. Be aware that this opens access to any user. allow_root might be better, but I haven't tested it, as it was blocked for other reasons.
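For example, with sshfs (user, host and paths are placeholders):

sshfs -o allow_other user@remote_server:/remote/dir /local/mountpoint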
I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.
Make sure the folder is being shared with the Docker embedded VM. This differs between the various types of Docker desktop installs. With Toolbox, I believe you can find the shared folders in the VirtualBox configuration. Also note that these directories are case-sensitive. One way to debug is to try:
docker run --rm -it -v "/:/host" ubuntu /bin/bash
And see what the filesystem looks like under "/host".
I encountered this problem on Docker for Windows after upgrading to 2.2.0.0 (42247). The issue was with the casing of the folder name that I provided in my arguments to the docker command.
Did you use this container before? You could try to remove all dangling Docker volumes before re-executing your command:
docker volume rm $(docker volume ls -qf dangling=true)
I tried your command locally (macOS) without any error.
I met this problem too.
I used to run the following command to share the folder with the container:
docker run ... -v c:/seleniumplus:/dev/seleniumplus ...
But it doesn't work anymore.
I am using Windows 10 as the host.
My Docker was recently upgraded to "19.03.5 build 633a0e".
I did change my Windows password recently.
I followed the instructions to re-share the "C" drive, restarted Docker, and even restarted the computer, but it didn't work :-(.
All of a sudden, I noticed that the folder is "C:\SeleniumPlus" in the file explorer, so I ran:
docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...
And it did work. So the Windows shared folder is case-sensitive when specified in the latest Docker ("19.03.5 build 633a0e").
I am working in Linux (WSL2 under Windows, to be more precise), and my problem was that a symlink for that folder existed on my host:
# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists.
# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23 2019 /etc/localtime -> ../usr/share/zoneinfo/UTC
It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
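What also works is resolving the symlink on the host before mounting; readlink -f is standard on Linux, so something like this (running date just to verify the timezone is visible):

docker run --rm -it -v "$(readlink -f /etc/localtime)":/etc/localtime:ro ubuntu date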
I had this issue when working with Docker in a CryFS-encrypted directory on Ubuntu 20.04 LTS. The same probably happens in other UNIX-like OSes.
The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: just uncomment the user_allow_other setting in it. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).
Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.
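Put concretely, the two steps look roughly like this (the sed pattern assumes the stock commented-out line; the directory names are the same placeholders as above):

sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
cryfs <secretdir> <opendir> -o allow_root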
Had the exact same error. In my case, I had used c instead of C when changing into my directory.
I solved this by restarting docker and rebuilding the images.
I put user_allow_other in /etc/fuse.conf.
Then mounting as in the example below solved the problem:
$ sshfs -o allow_other user@remote_server:/directory/ /local/mountpoint
I had this issue in WSL, likely caused by leaving some containers alive too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely to start fresh.
docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up
Then, I restarted WSL (wsl --shutdown), restarted Docker Desktop, and tried my command again.
In case you work with a separate Windows user with which you share the volume (usually C:): you need to make sure it has access to the folders you are working with, including their parents, up to your home directory.
Also make sure that EFS (Encrypting File System) is disabled for the shared folders.
See also my answer here.
I had the same issue while developing with Docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths, and tried to make directories instead.
Pruning Docker volumes / images / containers did not solve the issue. A simple restart of Docker Desktop did the job.
This error cropped up for me because my docker-compose file was looking for the APPDATA path on my machine, and macOS doesn't have an APPDATA environment variable. So I just created a .env file with the contents:
APPDATA=~/Library/
And my problem was solved.
I faced this error when another running container was already using the folder being mounted in the docker run command. Check for that, and stop the container if it is not needed. The best solution is to use a named volume, created with the following command:
docker volume create <volume-name>
Then mount this volume wherever it is required; it can be shared by multiple containers.
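A quick sketch of that flow (the volume name is arbitrary):

docker volume create shared-src
docker run --rm -it -v shared-src:/src ubuntu /bin/bash
docker run --rm -it -v shared-src:/src debian /bin/bash

The second container mounts the same volume, so both see the same data.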
For anyone having this issue on a Linux-based OS: try to remount the remote folders used by the Docker image. This helped me on Ubuntu:
sudo mount -a
I am running Docker Desktop (engine v20.10.5) on Windows 10 and faced a similar error. I removed the existing image from the Docker Desktop UI, deleted the folder in question (deleting it was an option for me because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.
In my case, my volume path (in a .env file for docker-compose) had a space in it:
/Volumes/some\ thing/folder
This worked on Docker 3 but not after updating to Docker 4, so I had to set my env variable to:
"/Volumes/some thing/folder"
I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default even root can't see a directory mounted by gocryptfs; only the user who executed the gocryptfs command can. To fix this, add user_allow_other to /etc/fuse.conf and use the -allow_other flag, e.g. gocryptfs -allow_other encrypted mnt.
In my specific instance, Windows couldn't tell me who owned my SSL certs (probably Docker). I took ownership of the SSL certs again under Properties, added read permission for docker-users and my user, and that seemed to fix the problem. After tearing my hair out for 3 days with just the "Daemon: Access Denied" error, I finally got a meaningful error related to another answer above ("mkdir failed" or whatever) on a mounted file (the SSL cert).
I'm using Docker Toolbox to run Docker containers on Windows. When I try to run docker-compose up, Docker cannot find docker-compose.yml; I must cd into the directory containing the file or specify the file path using the -f argument. How do I get the path?
docker info shows the Root Dir as /mnt/sda1/bla/bla, which is a virtual path and doesn't exist on my PC.
UPD: solved
You may have simply forgotten to save the docker-compose.yml; check this first. Also try this solution, it worked for me: https://github.com/docker/compose/issues/129
Hope you get it sorted.
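In general, docker-compose looks for docker-compose.yml in your current directory on the host, not inside the VM; the Root Dir shown by docker info is a path inside the Toolbox VM. So either cd to the project folder in the Docker Toolbox shell or pass the full host path with -f (the paths here are hypothetical):

cd /c/Users/you/project
docker-compose up

or, from anywhere:

docker-compose -f /c/Users/you/project/docker-compose.yml up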