Docker device path differs depending on the command

After a recent Docker update (Docker Desktop for Mac) my stack broke.
I'm using my docker-compose config in two ways: with the up command, and with the run command to execute some operations via containers (when the stack isn't up yet).
I'm getting this error:
ERROR: Configuration for volume my_code specifies "device" driver_opt /Users/me/Projects/project/backend/my_code, but a volume with the same name uses a different "device" driver_opt (/host_mnt/Users/me/Projects/project/backend/my_code). If you wish to use the new configuration, please remove the existing volume "my_code" first:
I have configured docker-compose with volumes shared between containers.
volumes:
  my_code:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/project/backend/my_code
      o: bind
It looks like the up and run commands somehow get different paths from $PWD in docker-compose. One gets prefixed with /host_mnt and the other doesn't. Is this a bug, or is my config invalid?
Docker for Mac 2.4.0.0 stable
Docker Compose 1.27.4
Catalina

I just had the same issue with the /host_mnt prefix, on an Ubuntu system.
This is something related to Docker Desktop.
The solution for me was to uninstall Docker and Docker Desktop according to the documentation:
https://docs.docker.com/desktop/install/ubuntu/
https://docs.docker.com/engine/install/ubuntu/
and afterwards also delete the Docker config files manually:
rm -rf ~/.docker
Then I installed Docker Engine instead of Docker Desktop.
The problem was caused by a Docker Desktop update that adds the /host_mnt prefix for compatibility with Windows users.
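If reinstalling isn't an option (for instance on macOS, the original poster's setup), a lighter workaround is the one the error message itself suggests: remove the stale volume so Compose recreates it with the current device path. A minimal sketch, assuming the volume's full name is my_code as reported in the error (Compose often prefixes the project name, so check docker volume ls first):
docker-compose down        # stop the stack first if the volume is in use
docker volume ls           # find the volume's actual name
docker volume rm my_code   # removes only the volume object; the bound host directory is untouched
docker-compose up -d       # the volume is recreated with the current device path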

Related

Where can Docker Compose volumes be found on the Windows host?

I have a docker-compose file with a volumes section for the given container:
video-streaming:
  image: video-streaming
  build:
    context: ./video-streaming
    dockerfile: Dockerfile-dev
  container_name: video-streaming
  volumes:
    - /tmp/history/npm-cache:/root/.npm:z
I'm running Docker on Windows and the image is Linux-based.
When I enter the container and add a file to /root/.npm, then close the container and run it again, the file is still there, so this volume works. But the question is: where can I find its location on the Windows host?
You should find the volumes in C:\ProgramData\docker\volumes. The filename will be a hash, which you can check with docker inspect.
If not, then note that you are simply mounting a host directory, /tmp/history/npm-cache, into your container. This directory is your volume.
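To see exactly what is mounted where, you can inspect the container directly; a minimal sketch (the container name comes from the compose file above):
docker inspect video-streaming --format '{{ json .Mounts }}'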
When using Docker for Windows, the question is whether you are using the old Docker Toolbox or the newer versions that use WSL/WSL2.
Docker Desktop configured for Linux containers and WSL/WSL2
The Docker engine is actually not running on Windows, but inside the WSL instance; Docker Desktop just makes docker commands available on Windows for ease of use.
So the volumes are probably inside that WSL instance (Linux).
You can find out which WSL instances you have by typing wsl -l in PowerShell.
Their file systems are available on Windows under the \\wsl$ path.
In your case, the volume is not named; it's in the exact location you specified for it:
/tmp/history/npm-cache, but inside the WSL instance that the Docker engine is installed on.
Through WSL
In PowerShell, write wsl ls /tmp/history; you should see npm-cache there.
The wsl command lets you run Linux commands on the actual WSL instance (the default one), which is probably the one running the Docker engine.
Alternatively, you can connect to that Linux instance by just typing wsl and going to that path: cd /tmp/history.
Once inside the WSL instance, you can write explorer.exe . to open Explorer at that location (on Windows).
Notice that the path will always start with \\wsl$, so you can go to that path on Windows and see all of your WSL instances and their file systems; try searching for "npm-cache" in Explorer, you might find it.
Via Docker commands
docker volume ls will give you all of the available volumes. Yours is not named, so it's probably one of the 'UUID' ones. You can inspect each one to find its location (probably still inside the WSL instance):
docker volume inspect {the-uuid-of-the-volume}
Once you inspect it, you will see that each volume has a Mountpoint field which points to the location of the volume (inside the WSL instance).
Unnamed volumes are created with different permissions from your user, so you might need sudo to interact with them via the WSL terminal.
If you go through Windows File Explorer on \\wsl$, you might not need extra permissions.
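A minimal sketch of that flow (the volume ID is a placeholder):
docker volume ls                                                 # unnamed volumes show up as long hex IDs
docker volume inspect <volume-id> --format '{{ .Mountpoint }}'   # print just the volume's location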

Docker volumes on WSL2 using Docker Desktop

I'm just trying out WSL 2 with Docker for Windows and I'm having an issue with mounted volumes:
version: "3.7"
services:
  node:
    build: .
    container_name: node
    hostname: node
    volumes:
      - ./app:/app
    stdin_open: true
The container builds and starts well, and I can access it with docker exec nicely, but the /app folder inside the container isn't bound to my laptop's app folder. However, the right path is actually mounted on the running container:
(here I run pwd on the host to check that it matches exactly what is mounted in the container)
➜ app pwd
/mnt/c/Users/willi/devspace/these/app
And a screenshot from Portainer showed which paths are mounted where in the container, and everything matched.
The files I create in the app folder on the host are not visible in the app folder of the container, and vice versa. This is weird and I don't know how to debug it.
Complementary info:
Windows 10 Pro 10.0.19041
Docker for Windows version : 2.3.0.4
docker version output in WSL : 19.03.12
docker-compose version : 1.26.2
Thanks
As @Pablo mentioned, the best practice seems to be using the WSL file system for mapping volumes.
Take a look at the Docker Documentation concerning WSL2:
Best practices
To get the best out of the file system performance when bind-mounting files:
Store source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path>) in the Linux filesystem, rather than the Windows filesystem.
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
If you have concerns about the size of the docker-desktop-data VHDX, or need to change it, take a look at the WSL tooling built into Windows.
If you have concerns about CPU or memory usage, you can configure limits on the memory, CPU, Swap size allocated to the WSL 2 utility VM.
To avoid any potential conflicts with using WSL 2 on Docker Desktop, you must uninstall any previous versions of Docker Engine and CLI installed directly through Linux distributions before installing Docker Desktop.
Everything works perfectly now; it turned out my problem was that my WSL distro was still on version 1. You can verify it with the command: wsl -l -v
  NAME                   STATE      VERSION
* docker-desktop-data    Stopped    2
  docker-desktop         Stopped    2
  Ubuntu-20.04           Running    2    <- This was at 1
Upgrade to WSL2
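For reference, a minimal sketch of the upgrade, run from PowerShell (the distro name comes from the wsl -l -v output above):
wsl --set-version Ubuntu-20.04 2
wsl -l -v    # confirm the VERSION column now shows 2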

docker-compose build and up

I am not an advanced user, so please bear with me.
I am building a Docker image using docker-compose -f mydocker-compose-file.yml ... on my machine.
The image is then pushed to a remote Docker registry.
Then, from a remote server, I pull down this image.
To run this image, I have to copy mydocker-compose-file.yml from my machine to the remote server and then run docker-compose -f mydocker-compose-file.yml up -d.
I find this very inefficient: why do I need the same YAML file just to run the image (do I?)?
Is there a way to spin up the container without copying this file to the remote machine?
As of Compose 1.24, along with the 18.09 release of Docker (you'll need at least that client version on the remote host), you can run docker commands against a remote host over SSH.
# all docker commands in this shell will now talk to the remote host
export DOCKER_HOST=ssh://user@host
# you can verify that with docker info to see which engine you're talking to
docker info
# and now run your docker-compose up command locally to start/stop containers
docker-compose up -d
With previous versions, you could configure TLS certificates to allow specific clients to connect to the docker API over a network connection. See these docs for more details.
Note: if you have host volumes, variables and paths will be expanded against your laptop's directories, but the mounts will happen on the remote server, where those directories may not exist. This is a good situation to switch to named volumes.
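A minimal sketch of that switch (service and volume names are hypothetical):
services:
  app:
    image: me/app:1.0
    volumes:
      - appdata:/data    # named volume, created on whichever engine runs the stack
volumes:
  appdata: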
Everything you can do with Docker Compose, you can do with plain docker commands.
Depending on how exactly you're interacting with the remote server, your tooling might have native ways to do this. One specific example I'm familiar with is the Ansible docker_container module. If you're already using a tool like Ansible, Chef, or Salt, you can probably use a tool like this to do the same thing your docker-compose.yml file does.
But otherwise there's more or less a direct translation between a docker-compose.yml file
version: '3'
services:
  foo:
    image: me/foo:20190510.01
    ports: ['8080:8080']
and a command line
docker run -d --name foo -p 8080:8080 me/foo:20190510.01
My experience has been that the docker run commands quickly become unwieldy and you want to record them in a file; and once they're in a file, you start to wish they were in a more structured format, even if you need an auxiliary tool to run them; which brings you back to copying around the docker-compose.yml file. I think that's pretty routine. (Something needs to tell the server what to run.)

Docker-compose can't find suitable file, even though it exists

So I am running Ubuntu 18.04 LTS on Windows 10 through Hyper-V, and I'm trying to run the docker-compose command in the terminal. When I'm inside the docker folder and run ls, it shows a docker-compose.yml file. Still, when I run the docker-compose command, it says no suitable configuration file was found.
docker-compose up -d
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
I'm using Docker version 18.09.0 and Docker Compose version 1.22.0.
On Ubuntu 18.04.2 LTS I was facing the same issue. I don't know the exact reason, but Docker and Docker Compose installed with snap were not working.
sudo snap remove docker && sudo snap remove docker-compose
I installed Docker from the official docs and Compose via apt, and now my Docker Compose file works.
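A minimal sketch of that reinstall (package names are those from Docker's official apt repository; this assumes the repository is already configured per the install docs):
# install Docker Engine from Docker's apt repository
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# install Compose from the distribution's repositories
sudo apt-get install docker-compose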
Found out the problem was the shared-folder functionality of Hyper-V and Windows 10. For some reason Docker won't work when the docker-compose.yml file is located inside a shared folder; when I move it outside the shared folder (e.g. into the home directory), it does work. So if I want to use the compose file, I have to place it outside the shared folder. The rest of the project runs as expected, so it's a little workaround.
I was getting the same error on Ubuntu 20.04. I could not use apt to install docker-compose (Forbidden errors on apt's sources.list, out of my control), so I had to install it with snap:
sudo snap install docker
That is a "caveat" of Docker installed with snap. From the docker-snap README: "All files that docker needs access to should live within your $HOME folder". So that is expected behavior.
Move your project to the $HOME folder. For example, I had my project in /usr/local/src/my_project and had to move it to ~/some_folder/my_project, and then I could run docker-compose.
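A minimal sketch of that move (paths taken from the example above):
mv /usr/local/src/my_project ~/some_folder/my_project
cd ~/some_folder/my_project
docker-compose up -d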
This may be a naive answer, but I am new to using Docker Desktop on Windows. I was trying to run a script file in the Ubuntu 20.04 LTS distro, and the path I was providing was not correct: a Windows user would write the path as C:\Users\usera\Desktop\Terminology-service\ols, but when I changed it to /mnt/c/Users/usera/Desktop/Terminology-service/ols, it worked.
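WSL ships a wslpath utility that does this translation for you; a minimal sketch using the path from the answer above:
# convert a Windows path to its WSL equivalent
wslpath 'C:\Users\usera\Desktop\Terminology-service\ols'
# prints: /mnt/c/Users/usera/Desktop/Terminology-service/ols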

Run jhipster-registry in production

This is a continuation of my previous question about running a jhipster microservices application on AWS.
I've used docker-machine to create a new VM with Docker installed.
I have set up a Docker registry, pushed my images to it, and logged into this registry on the AWS VM.
I copied the contents of the docker-compose directory I generated using yo jhipster:docker-compose and attempted to run:
docker-compose up -d
But I receive the error:
ubuntu@aws-test:~/docker-compose$ sudo docker-compose up
Unsupported config option for services service: 'jhipster-registry'
I can manually run the jhipster-registry with docker, but as there are many other underlying services, I'd prefer to create a production docker-compose.yml file.
It looks like you're using an older version of docker-compose that doesn't support the V2 format. You need to upgrade to at least 1.6.2 (but 1.7.0 is currently the latest).
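A minimal sketch of checking and upgrading (the download URL follows Compose's standard GitHub release pattern; adjust the version as needed):
docker-compose version    # confirm what's currently installed
sudo curl -L https://github.com/docker/compose/releases/download/1.7.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose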
Aside from your docker-compose.yml, you should have the jhipster-registry.yml and elk.yml files; if one of those files is not present it won't work, because the docker-compose file references them.
If you want everything in one file, you have to copy the jhipster-registry service from jhipster-registry.yml into your docker-compose.yml.
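A hypothetical sketch of what the merged file might look like (copy the actual service definition verbatim from your generated jhipster-registry.yml; the image and port here are only illustrative):
version: '2'
services:
  jhipster-registry:
    image: jhipster/jhipster-registry
    ports:
      - 8761:8761
  # ...plus the rest of the services from your generated docker-compose.yml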
