I have a docker-compose.yml
version: '3.3'
services:
  ssh:
    environment:
      - TZ=Etc/UTC
      - DEBIAN_FRONTEND=noninteractive
    build:
      context: './'
      dockerfile: Dockerfile
    ports:
      - '172.17.0.2:22:22'
      - '443:443'
      - '8025:8025'
    volumes:
      - srv:/srv:rw
    restart: always
volumes:
  srv:
After I run docker-compose up --build I can ssh into the container and there are files in /srv. 'docker volume ls' shows two volumes, srv and dockersetupsrv. They are both in /var/lib/docker/volumes. They both contain _data directories and show creation timestamps that match the Docker image creation times, but are otherwise empty. Neither one contains any of the files that are in the container's /srv directory. How can I share the container's /srv directory with the host?
You should be more specific about the directory mapping,
for example:
/srv:/usr/srv:rw
After that, when you add content to /srv on your host machine, it is automatically mapped into /usr/srv inside the container.
--> make sure that directory exists
You can check this link: https://docs.docker.com/storage/volumes/
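For the compose file in the question, a minimal sketch that swaps the named volume for a bind mount would look like this (the host-side path ./srv is an assumption; use whatever host folder you want to share):

services:
  ssh:
    build:
      context: './'
      dockerfile: Dockerfile
    volumes:
      - ./srv:/srv:rw   # ./srv on the host is bind-mounted over /srv in the container

Note that, as with any bind mount, an empty host directory will shadow whatever the image put in /srv, so you may need to copy the container's files out first (see the docker cp approach further down).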
I am running Docker on Windows 10 and have a Jenkins container.
I can use the Jenkins pipeline in the container to build images on the host:
docker -H host.docker.internal:2375 tag myproject:1.0 myproject:latest
I can start a container on the host using docker-compose:
docker-compose -H host.docker.internal:2375 -f /var/jenkins_home/myproject/docker-compose.yml up -d
The only issue is that if there is a 'volumes' entry in docker-compose.yml, it displays the error below.
Named volume "C:\docker\myproject:/target/myproject" is used in service "myproject" but no declaration was found in the volumes section.
docker-compose.yml file:
version: '3.9'
services:
  myproject:
    image: myproject:latest
    user: root
    container_name: myproject
    volumes:
      - C:\docker\myproject:/target/myproject
    ports:
      - 8080:8080
I understand it is because the Jenkins container cannot find 'C:\docker\myproject', but I want to share this folder between the host and the myproject container.
I tried the command below in the Jenkins container, but it is not working; -f can only read a file local to the container:
docker-compose -H host.docker.internal:2375 -f c:/myproject/docker-compose.yml up -d
Any idea how to run docker-compose with volumes in the Jenkins container to control the host Docker?
Update: problem solved by the compose file below.
version: '3.9'
services:
  myproject:
    image: myproject:latest
    user: root
    container_name: myproject
    volumes:
      - type: bind
        source: C:\docker\myproject
        target: /target/myproject
    ports:
      - 8080:8080
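To double-check that the bind mount really ended up on the host daemon, something like the following should work from the Jenkins container (container name taken from the compose file above):

docker -H host.docker.internal:2375 inspect myproject --format '{{ json .Mounts }}'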
I'm trying to bind the container's content to a host folder so that I can easily edit it, but for some reason it doesn't work!
Here is my docker-compose file:
version: "3"
services:
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- ./config:/etc/nginx/conf.d
Please note that this is my folder structure before the docker-compose command:
-project
--docker-compose.yaml
Thank you in advance
Mounting a folder from the container onto the host is not possible.
To achieve what you want, consider the following:
1. Launch the container without any volumes defined.
2. Run docker cp webserver:/etc/nginx/conf.d/. ./config to copy the content of /etc/nginx/conf.d/ to your config folder on the host.
3. Kill the container and relaunch it with the config folder mounted on /etc/nginx/conf.d (like in your original example). This will shadow the nginx config in the container with the one on your local machine (see the command sketch below).
When editing the local files, the changes will be reflected in the container.
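A minimal command sketch of those three steps, assuming the compose file from the question (service and container name webserver):

docker-compose up -d                               # 1. start with the volumes: entry removed or commented out
docker cp webserver:/etc/nginx/conf.d/. ./config   # 2. copy the shipped config to ./config on the host
docker-compose down                                # 3. stop and remove the container
docker-compose up -d                               # relaunch with the ./config bind mount enabled again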
If you want to persist your changes in the image after you are done, create a new Docker image by building the following Dockerfile
FROM nginx:mainline-alpine
COPY ./config/* /etc/nginx/conf.d/
I'm using https://github.com/sagemathinc/cocalc-docker on Linux. I want to be able to edit my host files from within Cocalc. How do I make a folder that is a symbolic link to my host user's home?
If I understood you correctly, you are looking for the -v flag when starting a new container.
So something like: docker run -v path:path -t myimage
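For the CoCalc case, a hedged sketch that exposes the host user's home inside the container (the image name my-cocalc-image and the /host-home target are placeholders; take the actual run command from the cocalc-docker README and just add the -v part):

docker run -d -v /home/myuser:/host-home my-cocalc-image   # host home appears as /host-home inside the container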
You can mount your local directory onto your container using Docker volumes. Here's the doc on that: https://docs.docker.com/engine/admin/volumes/volumes/#start-a-container-with-a-volume. Here's an example docker-compose.yml file:
version: "3.1"
services:
php-fpm:
build: docker/php-fpm
container_name: vendorapps-php-fpm
working_dir: /application
volumes:
- /localdirOnHost:/DestpathOnContainer
I've got a docker-compose.yml like this:
db:
  image: mongo:latest
  ports:
    - "27017:27017"
server:
  image: artificial/docker-sails:stable-pm2
  command: sails lift
  volumes:
    - server/:/server
  ports:
    - "1337:1337"
  links:
    - db
server/ is relative to the folder of the docker-compose.yml file. However, when I docker exec -it CONTAINERID /bin/bash and check /server, it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in Docker Settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click "Apply") and then re-enable it (and click "Apply"). In the process, you will be prompted for your PC's new password. After this, run your docker command (run or compose) again.
Try using:
volumes:
  - ./server:/server
instead of server/ -- the host side needs to start with ./ (or be an absolute path) to be treated as a host path rather than a volume name.
As per the Docker volumes documentation:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
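A minimal sketch of the two forms side by side (image name and paths are just illustrative):

docker run -v /srv/app/data:/data myimage   # absolute host path -> bind mount
docker run -v appdata:/data myimage         # bare name -> named volume called "appdata"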
I had a similar issue when I wanted to mount a directory from the command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container started successfully, but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So, I created a symlink under c:\Users\<username> that points to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
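If it helps, the symlink itself can be created from an elevated Windows command prompt with something like this (paths as in the example above):

mklink /D C:\Users\<username>\my_project D:\my_project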
If you are using Docker for Mac then you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount.
I don't know if other people made the same mistake, but the host directory path has to start from /home.
So my mistake was that in my docker-compose file I was wrongly specifying the following:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /Desktop/subfolder/subfolder2:/app/subfolder
when the host path should have been the full path starting from /home, something like:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
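A quick way to confirm the mount took effect, assuming the service name from the snippet above:

docker-compose exec myservice ls /app/subfolder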
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
  ...
  volumeConsumingService:
    container_name: production-001-volumeConsumingService
    hostname: production-001-volumeConsumingService
    image: group/production-001-volumeConsumingService
    build:
      context: .
      dockerfile: volumeConsumingService.Dockerfile
    depends_on:
      - anotherServiceDefinedEarlier
    restart: always
    volumes:
      - ../data/certbot/conf:/etc/letsencrypt # mounting
      - ../data/certbot/www:/var/www/certbot # not mounting
      - ../data/www/public:/var/www/public # not mounting
      - ../data/www/root:/var/www/root # not mounting
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - default
      - external
  ...
networks:
  external:
    name: routing
A workaround that seems to be working is to enforce a restart on the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
In the case when the volumes are not mounted after a host reboot, adding a cron task to restart the service once should do.
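A minimal sketch of such a cron entry (in root's crontab or that of any user allowed to run docker; the sleep is an arbitrary delay to let the daemon and the compose stack come up first):

@reboot sleep 120 && docker restart production-001-volumeConsumingService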
In my case, the volume was empty because I did not use the right path format: the path must not be wrapped in double quotes.
If you have a relative or absolute path with spaces in it, you do not need to use double quotes around the path. You can just use the path with spaces as-is; it will be understood, since docker-compose uses ":" as the delimiter and does not split on spaces.
Ways that do not work (double quotes are the problem!):
volumes:
  - "MY_PATH.../my server":/server
  - "MY_PATH.../my server:/server" (I might have missed testing this, not sure!)
  - "./my server":/server
  - ."/my server":/server
  - "./my server:/server"
  - ."/my server:/server"
Two ways you can do it (no double quotes!):
volumes:
  - MY_PATH.../my server:/server
  - ./my server:/server
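If in doubt, docker-compose config prints the resolved compose file, so you can see exactly how the volume string was parsed (run it from the directory containing the compose file):

docker-compose config   # check the volumes section of the output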