Run docker-compose on Windows 7 - docker

I'm using Docker Toolbox to run Docker containers on Windows. When I try to run docker-compose up, Docker cannot find docker-compose.yml. I have to cd into the directory containing it or specify the file path with the -f argument. How do I get the path?
docker info shows the Root Dir as /mtn/sda1/bla/bla, which is a virtual path and doesn't exist on my PC.
UPD: solved

You may have simply forgotten to save docker-compose.yml; check this first. Also try this solution, it worked for me: https://github.com/docker/compose/issues/129
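If the file does exist and is saved, you can also point Compose at it explicitly instead of changing into its directory. A sketch with a placeholder project path (with Docker Toolbox the shell usually uses /c/... style paths):
# run from anywhere by giving the compose file's full path (placeholder path)
docker-compose -f /c/Users/me/myproject/docker-compose.yml up
# or change into the directory that contains the file
cd /c/Users/me/myproject && docker-compose up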
Hope you get it sorted.

Related

Docker container copy files from local path into container

I need to copy my customized Keycloak themes into the Keycloak container so I can use them, as described here:
https://medium.com/@auscunningham/change-login-theme-in-keycloak-docker-image-55b5fa5ceec4
After identifying my container ID with docker container ls, I list the files like this: docker exec 7e3a420017a8 ls ./keycloak/themes
It returns the list of themes correctly, but when I use either of these to copy my files from local to the container:
docker cp ./mycustomthem 7e3a420017a8:/keycloak/themes/
or
docker cp ./mycustomthem 7e3a420017a8:./keycloak/themes/
I get the following error:
Error: No such container:path: 7e3a420017a8:/keycloak
I cannot imagine where the error is, since I can list the files in that folder inside the container. Could you help me?
Thank you in advance.
Works on my computer.
docker cp mycustomthem e67f76e8740b:/opt/jboss/keycloak/themes/raincatcher-theme
You have used the wrong path in the command; use the full path /opt/jboss/keycloak/themes/raincatcher-theme.
This seems like a weird way to approach this problem. Why not just have a Dockerfile that uses the Keycloak container as the base image and then copies the theme into the container at build time? Then just run the image you build? This will also be a more stable pattern in the long term if you ever decide to add any plugins or customizations and it provides an easy upgrade path to new versions by just changing the base image in your Dockerfile.
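A rough sketch of that pattern (the base image tag and the derived image name are placeholders, not taken from the question):
# Dockerfile: bake the custom theme into a derived image
FROM jboss/keycloak:latest
COPY mycustomthem /opt/jboss/keycloak/themes/mycustomthem
Build and run it with something like docker build -t keycloak-custom . followed by docker run keycloak-custom.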
Update, based on your edited question:
Try the following:
docker cp ./mycustomthem 7e3a420017a8:/opt/jboss/keycloak/themes/
The correct path in Keycloak is actually /opt/jboss/keycloak/themes/

Where is the data in docker container stored in docker?

Similar question: mac image path
On Mac, when I run docker inspect containerID
I see most of the stuff is coming from /var/lib/docker/,
however, this path exists neither on the host (Mac) nor inside the Docker container.
What does this path refer to?
You can find your files at container:path and use the docker commands to copy them to your local machine and vice versa (I'm assuming you are trying to move files, e.g. from your local machine to your container). I had the exact same issue you mentioned, but I managed to move files with
docker cp local_path containerID:target_path
To see your container ID, simply run docker ps -a; it will list the container even if it is not running.
See https://docs.docker.com/engine/reference/commandline/cp/
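For example, copying in either direction might look like this (container name and paths are placeholders):
# host -> container
docker cp ./localfile.txt mycontainer:/tmp/localfile.txt
# container -> host
docker cp mycontainer:/tmp/localfile.txt ./localfile.txt
# list container IDs and names, including stopped containers
docker ps -a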

Docker-compose can't find suitable file, even though it exists

So I am running Ubuntu 18.04 LTS on Windows 10 through Hyper-V and I'm trying to run the docker-compose command in the terminal. When I'm inside the docker folder and run ls, it shows a docker-compose.yml file. Still, when I run the docker-compose command, it says no suitable configuration file is found.
docker-compose up -d
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
I'm using Docker version 18.09.0 and Docker Compose version 1.22.0.
On Ubuntu 18.04.2 LTS I was facing the same issue. I don't know the exact reason, but Docker and Docker Compose installed with snap were not working.
sudo snap remove docker && sudo snap remove docker-compose
I installed Docker from the official docs here and Compose from here via apt, and now my Docker Compose file works.
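Roughly, the non-snap install looks like this (verify the exact steps and package versions against the current official docs):
# add Docker's apt repository first, as described in the official docs, then:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Compose can also be installed via apt (Ubuntu package name)
sudo apt-get install docker-compose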
Found out the problem was the shared folder functionality with Hyper-V and Windows 10. For some reason Docker won't work when the docker-compose.yml file is located inside the shared folder; when I move it outside the shared folder (e.g. to the home directory), it does work. So if I want to use the compose file, I have to place it outside the shared folder. The rest of the project runs as expected, so it's a small workaround.
Was getting the same error on Ubuntu 20.04. I could not use apt to install docker-compose (Forbidden errors on apt's sources.list, out of my control), so I had to install it with snap:
sudo snap install docker
That is a "caveat" of docker installed with snap. From docker-snap README: "All files that docker needs access to should live within your $HOME folder". So that is expected behavior.
Move your project to the $HOME folder. For example, I had my project in /usr/local/src/my_project and had to move it to ~/some_folder/my_project, and then I could run docker-compose.
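In shell terms, that move was roughly (paths as in the example above; use sudo and chown if the project is root-owned):
# relocate the project under $HOME so the snap-installed docker can access it
mkdir -p ~/some_folder
mv /usr/local/src/my_project ~/some_folder/my_project
cd ~/some_folder/my_project && docker-compose up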
It will be a bit of a stupid answer, I believe, but as I am new to using Docker Desktop on Windows: I was trying to run a script file using the Ubuntu 20.04 LTS distro, and the path I was providing was not correct. On Windows the path would be written like C:\Users\usera\Desktop\Terminology-service\ols, but here, when I changed it to /mnt/c/Users/usera/Desktop/Terminology-service/ols, it worked.

Docker tries to mkdir the folder that I mount

Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run
docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash
This command exits with the following error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.
I'm using Docker Toolbox on Windows 10 Home.
For anyone running mac/osx and encountering this, I restarted docker desktop in order to resolve this issue.
Edit: It would appear this also fixes the issue on Windows 10
My trouble was a FUSE-mounted volume (e.g. sshfs) that got mounted again into the container. It didn't help that the FUSE mount had the same ownership as the user inside the container.
I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.
Eventually it helped to mount the FUSE volume with the allow_other option. Be aware that this opens access to any user. allow_root might be better, but I haven't tested it, as it was blocked for other reasons.
I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.
Make sure the folder is being shared with the Docker embedded VM. This differs between the various types of Docker desktop installs. With Toolbox, I believe you can find the shared folders in the VirtualBox configuration. You should also note that these directories are case-sensitive. One way to debug is to try:
docker run --rm -it -v "/:/host" ubuntu /bin/bash
And see what the filesystem looks like under "/host".
I encountered this problem on Docker for Windows after upgrading to 2.2.0.0 (42247). The issue was the casing of the folder name that I provided in my arguments to the docker command.
Did you use this container before? You could try to remove all the dangling Docker volumes before re-executing your command.
docker volume rm $(docker volume ls -qf dangling=true)
I tried your command locally (MacOS) without any error.
I met this problem too.
I used to run the following command to share the folder with the container:
docker run ... -v c:/seleniumplus:/dev/seleniumplus ...
But it doesn't work anymore.
I am using the Windows 10 as host.
My docker has recently been upgraded to "19.03.5 build 633a0e".
I did change my windows password recently.
I followed the instructions to re-share the "C" drive, restarted Docker and even restarted the computer, but it didn't work :-(.
All of a sudden, I noticed that the folder is "C:\SeleniumPlus" in File Explorer, so I ran
docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...
And it did work. So the Windows shared folder path is case-sensitive in the latest Docker ("19.03.5 build 633a0e").
I am working on Linux (WSL2 under Windows, to be more precise) and my problem was that there was a symlink for that folder on my host:
# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists.
# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23 2019 /etc/localtime -> ../usr/share/zoneinfo/UTC
It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
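In other words, something like this (the image and remaining options are whatever you already use):
# mount the symlink's target instead of the symlink itself
docker run --rm -it -v /usr/share/zoneinfo/UTC:/etc/localtime ubuntu /bin/bash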
I had this issue when I was working with Docker in a CryFS-encrypted directory on Ubuntu 20.04 LTS. The same probably happens on other UNIX-like OSes.
The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: just uncomment the user_allow_other setting in it. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).
Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.
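A condensed sketch of those two steps (directory names are placeholders):
# enable allow_* options for non-root FUSE mounts by uncommenting user_allow_other
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
# remount the encrypted directory so root (and therefore Docker) can read it
cryfs ~/secretdir ~/opendir -o allow_root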
Had the exact same error. In my case, I had used c instead of C when changing into my directory.
I solved this by restarting docker and rebuilding the images.
I have put user_allow_other in /etc/fuse.conf.
Then mounting as in the example below solved the problem:
$ sshfs -o allow_other user@remote_server:/directory/
I had this issue in WSL, likely caused by leaving some containers alive for too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely so you start fresh.
docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up
Then, I restarted WSL (wsl --shutdown), restarted docker desktop, and tried my command again.
In case you work with a separate Windows user with which you share the volume (usually C:): make sure it has access to the folders you are working with, including their parents, up to your home directory.
Also make sure that EFS (Encrypting File System) is disabled for the shared folders.
See also my answer here.
I had the same issue when developing using Docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths and tried to make directories instead.
Pruning Docker volumes/images/containers did not solve the issue. A simple restart of Docker Desktop did the job.
This error crept up for me because my docker-compose file was looking for the APPDATA path on my machine on macOS. macOS doesn't have an APPDATA environment variable, so I just created a .env file with the contents:
APPDATA=~/Library/
And my problem was solved.
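The compose file was presumably referencing the variable in a volume mapping, something like this hypothetical fragment:
# docker-compose.yml fragment (hypothetical service and paths)
services:
  myservice:
    volumes:
      - ${APPDATA}/myapp:/data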
I faced this error when another running container was already using the folder being mounted in the docker run command. Check for that and, if the other container is not needed, stop it. The best solution is to use a named volume, created with the following command:
docker volume create
Then mount this created volume in any containers that need to share it.
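A minimal sketch of that approach (volume, container, and image names are placeholders):
# create a named volume and mount it into the containers that need it
docker volume create shared_data
docker run -d --name app1 -v shared_data:/data some_image
docker run -d --name app2 -v shared_data:/data some_image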
For anyone having this issue on a Linux-based OS, try to remount the remote folders that the Docker image uses. This helped me on Ubuntu:
sudo mount -a
I am running Docker Desktop (Docker Engine v20.10.5) on Windows 10 and faced a similar error. I removed the existing image from the Docker Desktop UI, deleted the folder in question (deleting the folder was an option for me because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.
In my case my volume path (in a .env file for docker-compose) had a space in it:
/Volumes/some\ thing/folder
This worked on Docker 3 but not after updating to Docker 4, so I had to set my env variable to:
"/Volumes/some thing/folder"
I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default even root can't see a directory mounted by gocryptfs; only the user who executed the gocryptfs command can. To fix this, add user_allow_other to /etc/fuse.conf and use the -allow_other flag, e.g. gocryptfs -allow_other encrypted mnt
In my specific instance, Windows couldn't tell me who owned my SSL certs (probably Docker). I took ownership of the SSL certs again under Properties, added read permission for docker-users and my user, and that seems to have fixed the problem. After tearing my hair out for 3 days with just the Daemon: Access Denied error, I finally got a meaningful error related to another answer above ("mkdir failed" or whatever) on a mounted file (the SSL cert).

Does data need to have a specific format to upload it to Docker?

I would like to know if there is a specific way to upload data to Docker. I've been stuck on this for a week and I am sure the answer will be something simple.
Does anyone know? I am working with a windows 10 machine.
You can mount directories from the host system inside the container and access their contents that way, if that's what you mean by 'data'.
You should check out Manage data in containers for more info.
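For example, a bind mount on Windows might look like this (paths and image are placeholders):
# make a host folder visible inside the container at /data
docker run --rm -it -v C:/Users/me/mydata:/data ubuntu bash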
You can use the docker cp command to copy the file.
For example: if you want to copy abc.txt to the location /usr/local/folder inside some Docker container (you can get the container name from the NAMES column by executing docker ps), then you just execute:
docker cp abc.txt ContainerName:/usr/local/folder
(abc.txt is local to the folder from where you are executing the command. You can also provide the full path of the file.)
After this, just get into the container with:
docker exec -it ContainerName bash
then cd /usr/local/folder. You will see your file copied there.
