How to reference a WSL path in docker-compose, in Windows

I would like to be able to reference a Dockerfile that lives inside a WSL distribution from docker-compose running on Windows.
The docker-compose file is inside \\wsl$\Ubuntu-20.04\home\marvinirwin\WebstormProjects\epub-finder, and so is the Dockerfile it's trying to reference:
version: "3.9"
services:
  server:
    build: \\wsl$$\Ubuntu-20.04\home\marvinirwin\WebstormProjects\epub-finder\server.dev.Dockerfile
    ports:
      - "3001:3001"
With that configuration, when I run docker-compose up it produces the following error:
unable to prepare context: path "\\\\?\\\\\\wsl$\\Ubuntu-20.04\\home\\marvinirwin\\WebstormProjects\\epub-finder\\server.dev.Dockerfile" not found
The file exists at the path provided. The error message suggests that the path has been escaped incorrectly; I'm not sure what the question mark is doing.
I have tried using the relative path ./server.dev.Dockerfile, but the error message is identical.
I'm using docker desktop on Windows, and both docker WSL instances and the Ubuntu instance I'm using are all WSL version 2.
I've read that UNC paths don't work, so I created a symlink (C:\wsl) to the distribution's filesystem, and am able to ls C:\wsl\home\marvinirwin\WebstormProjects\epub-finder\server.dev.Dockerfile successfully.
However, the configuration build: C:\wsl\home\marvinirwin\WebstormProjects\epub-finder\server.dev.Dockerfile (the same path I was able to ls from PowerShell) fails with the same error.
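For reference, compose's short build: form expects a directory (the build context), not a Dockerfile path, which matches the "unable to prepare context" wording of the error. A minimal sketch of the long form, assuming the compose file sits next to the Dockerfile inside the distribution:
version: "3.9"
services:
  server:
    build:
      context: .
      dockerfile: server.dev.Dockerfile
    ports:
      - "3001:3001"
With a relative context like this, running docker-compose from inside the WSL distribution sidesteps the UNC-path translation entirely.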

Related

When running docker-compose remotely, an error occurs with mounting volumes

I am trying to run a project with docker-compose on a remote server. Everything works, but as soon as I add the volume mount, it gives an error:
Error response from daemon: invalid mount config for type "bind": invalid mount path: 'C:/Users/user/Projects/my-raspberry-test' mount path must be absolute
To run I use tools from PhpStorm.
The docker-compose.yml file itself looks like this:
version: "3"
services:
  php:
    image: php:cli
    volumes:
      - ./:/var/www/html/
    working_dir: /var/www/html/
    ports:
      - 80:80
    command: php -S 0.0.0.0:80
I checked by ssh:
Daemon is running,
Docker works (on a similar Dockerfile with the same tasks),
Docker-compose works (on the same file).
I also checked remote docker run from PhpStorm with this Dockerfile:
FROM php:cli
COPY . /var/www/html/
WORKDIR /var/www/html/
CMD php -S 0.0.0.0:80
It didn’t give an error and it worked.
OS on devices:
PC: Windows 10
Server: Fedora Server
Without mounting the volume in docker-compose, everything starts. Has anyone faced a similar problem? (PHP here is just an example.)
The path must be absolute on the remote host, and the project files themselves must exist there. That is, you need to upload the project to the remote host.
I corrected everything like this:
volumes:
  - /home/peter-alexeev/my-test:/var/www/html/
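A sketch of the workflow this implies, assuming the remote host is reachable over SSH as peter-alexeev@server (hostname assumed) and a Docker/Compose version recent enough to accept ssh:// in DOCKER_HOST:
# copy the project to the path referenced in the compose file
rsync -az --exclude .git ./ peter-alexeev@server:/home/peter-alexeev/my-test
# point the local CLI at the remote daemon and start the stack there
DOCKER_HOST=ssh://peter-alexeev@server docker-compose up -d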

Docker file shares on Ubuntu host

Good morning,
I am currently trying to figure out how to create file shares on Ubuntu as the host OS for Docker. On Windows and OSX you can set up file sharing as below (Docker Desktop's File Sharing settings; the screenshot is not reproduced here):
I require access to the file share in my docker-compose; as an example, see below:
version: '3.9'
services:
  node_gauc:
    image: node-g:v1
    ports:
      - "444:444" # https test port
    volumes:
      - ./NodeServer/cert/https.crt:/usr/share/node/cert/https.crt
      - ./NodeServer/cert/key.pem:/usr/share/node/cert/key.pem
    build:
      context: .
      dockerfile: ./NodeServer/dockerfile
    restart: unless-stopped
    container_name: node-g
If I don't have access when I build and start the container I get the following issues:
ERROR: for node-g Cannot start service node_g: error while creating mount source path '/usr/share/t/work/6b37be0079afed03/NodeServer/cert/https.crt': mkdir /usr/share/t: read-only file system
ERROR: for node_g Cannot start service node_g: error while creating mount source path '/usr/share/t/work/6b37be0079afed03/NodeServer/cert/https.crt': mkdir /usr/share/t: read-only file system
ERROR: Encountered errors while bringing up the project.
I am still unsure why it's trying to create a directory, but I suppose that is another matter.
Is it possible to create a file share on an Ubuntu host server similar to what you can do on OSX (Mac) or Windows?
Many thanks for your help
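For context: on a native Linux host Docker needs no file-sharing configuration at all; any host path that already exists can be bind-mounted directly. A minimal sketch under that assumption (/home/me is a placeholder; the error above suggests the mount source path did not exist, so the daemon tried and failed to create it):
services:
  node_gauc:
    image: node-g:v1
    volumes:
      # absolute host paths; both files must already exist on the host
      - /home/me/NodeServer/cert/https.crt:/usr/share/node/cert/https.crt:ro
      - /home/me/NodeServer/cert/key.pem:/usr/share/node/cert/key.pem:ro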

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read very) new to Docker so experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful, as I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it is slightly more involved than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my DockerHub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on StackOverflow.
(Screenshot in the original answer: the container's files shown on my machine, the host.)
You can read more about Docker volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site) it overwrites the path within the container (/var/www/site) with the contents of the path on your host (./site), which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command line access inside of the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see if the staged test.txt file is now inside the container. This gives you definitive evidence that when you mount volumes, the data on your host overwrites the data in the container.
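A sketch of that pre-staging test, using the container_name my_laravel from the question's compose file:
# stage a marker file on the host, start the stack, then look for the file in the container
mkdir -p site && echo hello > site/test.txt
docker-compose up -d
docker exec -it my_laravel ls -a /var/www/site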
With that being said, doing something like this to share a log directory will work. The volume path specified on the container is still overwritten; the difference is that the container writes to that path at runtime, rather than relying on it for config files/app files.
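For instance, a sketch of that log-directory pattern (the storage/logs path is an assumption, borrowed from Laravel's default layout):
services:
  laravel:
    volumes:
      # the host directory starts empty; the app writes its logs into it at runtime
      - ./logs:/var/www/site/storage/logs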
Hope this helps.

What is happening when using ../ with docker-compose volume

I am having problems writing files out from inside a docker container to my host computer. I believe this is a privilege issue, and I prefer not to set privileged: true. A workaround for writing out files is prepending ../ to a volume in my docker-compose.yml file. For example,
version: '3'
services:
  example:
    volumes:
      - ../:/example
What exactly is ../ doing here? Is it taking from the container's privileges and "going up" a directory to the host machine? Without ../, I am unable to write out files to my host machine.
Specifying a path as the source, as opposed to a volume name, bind mounts a host path to a path inside the container. In your example, ../ will be visible inside the container at /example on a recent version of docker.
Older versions of docker can only access the directory it is in and lower, not higher, unless you specify the higher directory as the context.
To run the docker build from the parent directory:
docker build -f myapp/Dockerfile /home/me
As opposed to:
docker build -f Dockerfile /home/me/myapp
Doing the same in Compose:
#docker-compose.yml
version: '3.3'
services:
  yourservice:
    build:
      context: /home/me
      dockerfile: myapp/Dockerfile
Or with your example:
version: '3'
services:
  example:
    build:
      context: /home/me/app
      dockerfile: docker/Dockerfile
    volumes:
      - /home/me/app:/example
Additionally, you have to supply full paths, not relative paths, i.e.:
- /home/me/myapp/files/example:/example
If you have a script that is generating the Dockerfile from an unknown path, you can use:
CWD=`pwd`; echo $CWD
to refer to the current working directory. From there you can append /.. to reach the parent.
Alternately, you can build the image from a directory one level up; or use a volume that you can share with an image run from a higher directory; or output your file to stdout and redirect the output of the command into the file you need from the script that runs it.
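A sketch of that last stdout approach (myapp and generate-config are hypothetical names for illustration):
# the container writes the generated file to stdout; the host shell captures it,
# so no bind mount outside the build context is needed
mkdir -p generated
docker run --rm myapp generate-config > generated/config.yml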
See also: Docker: adding a file from a parent directory
The statement volumes: ['../:/example'] makes the parent directory of the directory containing docker-compose.yml on the host (../) visible inside the container at /example. Host-directory bind mounts like this, plus some equivalent constructs using a named volume attached to a specific host directory, are the only ways a container can write out to the host filesystem.
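A sketch of the named-volume equivalent mentioned above; /home/me stands in for the absolute path of the parent directory, since the local driver's device option needs an absolute path that already exists:
version: '3.7'
services:
  example:
    volumes:
      - parent_dir:/example
volumes:
  parent_dir:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/me # absolute path to the parent directory (assumed)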

Variable substitution not working on Windows 10 with docker compose

I'm wondering if I've stumbled on a bug or if there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (the installed Docker version is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
  - ${FOOBAR}/build/:/usr/share/nginx/html/
And this variable doesn't exist docker compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
  - ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then not start the container properly and displays the following error (trying to access the nginx container gives a "host is unreachable" message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal it will output the correct path that I've set in the environment variable. If I replace the ${PROJECT_DIR} with the environment variable value the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly, without error messages, if I use the following (contains the full path to the local files):
volumes:
  - ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it can no longer find the files. If I replace the variable with the path it contains, it can find the files again.
The behaviour above isn't consistent either. When I added a second environment variable for substitution, I got the oci runtime error; it kept occurring when I removed that second variable, and only went away when I also removed the first one. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding the files.
Starting a bash session in the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss as to what docker is doing and what it expects from me, especially as I have no idea what it expands the variables in the compose file to.
In the end the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care whether the host uses Docker natively or via VirtualBox (this can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need. For example, a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding/overwriting the settings in docker-compose.yml with those present in docker-compose.override.yml.
Then, when running docker-compose up, the result is a docker run configured for the machine you're working on.
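The effective configuration after the merge would look roughly like this (a sketch of what compose assembles, not output copied from a tool):
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
  volumes: # contributed by docker-compose.override.yml
    - /workspaces/Eclipse/project/:/var/www/html/
# php_db is unchanged, since the override file doesn't mention it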
You can get similar behaviour with custom files names if you change a few things in your docker-compose command:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The detail is that docker-compose can accept multiple compose files and it will combine them into one. This happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override/add the settings required for the specific computer you're running docker on.
The page Overview of docker-compose CLI has more details on how these commands work.
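To check what the combined file actually resolves to (including variable substitution), compose's config command prints the merged result without starting anything:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml config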
