I am trying to use the following docker-stack.yml file to deploy my services to my Docker Swarm version 17.06-ce. I want to use volumes to map the C:/logs directory on my Windows host machine to the /var/log directory inside my container.
version: '3.3'
services:
  myapi:
    image: mydomain/myimage
    ports:
      - "5000:80"
    volumes:
      - "c:/logs:/var/log/bridge"
When I remove the volumes section, my containers start fine. After adding the volume, the container never even attempts to start, i.e.:
docker container ps --all does not show my container.
docker events does not show the container trying to start.
The following command works for me, so I know that my syntax is correct:
docker run -it -v "c:/logs:/var/log/bridge" alpine
I've read the volumes documentation a few times now. Is the syntax for my volume correct? Is this a supported scenario? Is this a Docker bug?
docker run works with host-path mounts, and with the version 2 compose file format docker-compose can mount custom host paths directly. With the version 3 format (used for stack deploys) we have to use named volumes, either with the default volume path or a custom path.
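For reference, the version 2 style referred to here is the plain short-syntax bind mount, which maps a host path straight into the container. A sketch using the paths from the question:

```yaml
version: '2'
services:
  myapi:
    image: mydomain/myimage
    volumes:
      # host path : container path
      - 'c:/logs:/var/log/bridge'
```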
Here is a docker-compose file with a default volume:
version: "3.3"
services:
  mysql:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
volumes:
  db-data:
The volume is stored at the default host path /var/lib/docker/volumes/<volume-name>/_data.
There is also the option of backing the volume with a custom host path:
version: "3.3"
services:
  mysql:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
volumes:
  db-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /home/ubuntu/db-data/
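Applied to the original question, the same pattern would look roughly like this. This is only a sketch: it assumes the local driver on the Windows host accepts the c:/logs path in the device option, and that the directory exists on every node the task can be scheduled on:

```yaml
version: '3.3'
services:
  myapi:
    image: mydomain/myimage
    ports:
      - "5000:80"
    volumes:
      - logs:/var/log/bridge

volumes:
  logs:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: 'c:/logs'
```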
VOLUMES FOR SERVICES, SWARMS, AND STACK FILES
Related
I would like to mount a directory from inside a Docker container to my Ubuntu Linux host machine using docker-compose.yml.
The directory in the container is /usr/local/XXX and I want to mount it at /home/Projects/XX.
How can I make it happen?
This is my docker-compose.yml file:
version: '3'
services:
  MyContainer:
    image: XX.XXX.XXX.XXX:XXXX/XXX/MyContainer:latest
    restart: always
    container_name: MyContainer
    hostname: MyContainer_docker
    privileged: true
    ports:
      - "XXXX:XX"
    volumes:
      - /home/Project/workspace/XXX/XXXX:/home/XX
    environment:
      - ...
    extra_hosts:
      - ...
    networks:
      net_plain3:
        ipv4_address: ...
networks:
  # ...etc...
It is possible with the right driver options.
Technically, you still mount the host directory into the container, but the result is that the host directory gets populated with the data from the container directory. Usually it's the other way around, which is why you need those driver options.
services:
  somebox:
    volumes:
      - xx-vol:/usr/local/XXX

volumes:
  xx-vol:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/Projects/XX
Empty named volumes are initialized with the content of the image at the mount location when the container is created.
- bmitch
So the key here is to create a named volume that uses as device the desired location on the host.
Here is a full working demonstration.
I create the following Dockerfile to add a text file in the /workspace dir:
FROM busybox
WORKDIR /workspace
RUN echo "Hello World" > hello.txt
Then a compose.yaml to build this image and mount a volume with these driver options:
services:
  databox:
    build: ./
    volumes:
      - data:/workspace

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/blue/scrap/vol/data
Now I run the following commands:
$ mkdir data
$ docker-compose up
[+] Running 1/0
⠿ Container vol-databox-1 Created 0.0s
Attaching to vol-databox-1
vol-databox-1 exited with code 0
$ cat /home/blue/scrap/vol/data/hello.txt
Hello World
As you can see, the hello.txt file ended up on the host. It was not created after container startup; it was already inside the container's file system when the container started, since it was added during the build.
That means it is possible to populate a host directory with data from a container, without the data having to be generated after the volume is mounted, which is usually the case.
I'm trying to create a service that must join an existing stack, so I force the compose file to use the same network.
The network definitely persists:
docker network ls
NETWORK ID      NAME          DRIVER    SCOPE
oiaxfyeil72z    ELK_default   overlay   swarm
okhs1e1wu73y    ELK_elk       overlay   swarm
My docker-compose.yml:
version: '3.3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/:/usr/share/logstash/pipeline/:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    external:
      name: ELK_elk
The other services were created with:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Checking with docker stack services:
docker stack services ELK
ID NAME MODE REPLICAS IMAGE PORTS
c0rux6mdvzq3 ELK_kibana replicated 1/1 docker.elastic.co/kibana/kibana:7.5.1 *:5601->5601/tcp
j824fd0blxdp ELK_elasticsearch replicated 1/1 docker.elastic.co/elasticsearch/elasticsearch:7.5.1 *:9200->9200/tcp, *:9300->9300/tcp
Then I try to bring the service up with docker-compose up -d. The service is not created, and it produces this error:
docker-compose up -d
WARNING: Some services (logstash) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Removing tmp_logstash_1
Recreating bbf503fc3eaa_tmp_logstash_1 ... error
ERROR: for bbf503fc3eaa_tmp_logstash_1 Cannot start service logstash: Could not attach to network ELK_elk: rpc error: code = PermissionDenied desc = network ELK_elk not manually attachable
ERROR: for logstash Cannot start service logstash: Could not attach to network ELK_elk: rpc error: code = PermissionDenied desc = network ELK_elk not manually attachable
ERROR: Encountered errors while bringing up the project.
The issue is that the elk network is defined as an "overlay" network. That is a Docker Swarm feature, so docker-compose does not know how to deal with it.
Instead of using docker-compose up, you need to deploy a Docker Swarm stack:
docker stack deploy -c docker-compose.yml <stack_name>
You can refer to the Docker documentation for more info:
https://docs.docker.com/network/
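Alternatively, if the logstash container really must be started with docker-compose and join the swarm network, the overlay network itself has to be created with the --attachable flag; that is what the "not manually attachable" part of the error message is complaining about. A sketch, assuming you can afford to recreate ELK_elk (recreating it disconnects the services already using it):

```
# Recreate the overlay network so standalone containers may attach to it
docker network rm ELK_elk
docker network create --driver overlay --attachable ELK_elk
```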
For some reason, non-manager nodes only see overlay networks that have active containers attached to them. So, on the non-manager node:
docker run --rm -d --name dummy busybox # Run a dummy container
docker network connect [OVERLAY_NETWORK] dummy # Connect to overlay network
Now the network is available on the non-manager node and you can run:
docker compose -f compose.yaml -p project up -d
docker stop dummy # Remove dummy container
Compose file:
networks:
  db:
    external: true
    driver: overlay
I'm trying to get my head around Docker and I'm confused about how volumes work.
The official documentation has a bit of code similar to the docker-compose.yml file I'm working with:
version: "3.7"
services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: vip
  mysql:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
volumes:
  db-data:
networks:
  overlay:
I'm confused as to what the db-data:/var/lib/mysql/data refers to and where docker is actually storing the database data, especially as /var/lib/mysql doesn't exist as a directory on the host.
So this line
volumes:
  - db-data:/var/lib/mysql/data
basically mounts the volume db-data from host into /var/lib/mysql/data in the container.
If db-data doesn't exist, Docker will create it for you on the host.
You can read more here.
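If you want to see exactly where on the host that data lives, docker volume inspect reports the backing directory. A sketch; note that compose and stack deployments prefix the volume name with the project or stack name, so the actual name may differ:

```
# Show the host directory backing the named volume
docker volume inspect db-data --format '{{ .Mountpoint }}'
```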
This is not a bind mount. The docker-compose file creates a volume named db-data that is not linked to any host path (note the bare "db-data:" entry at the end) and is managed by Docker internally. It will typically end up somewhere under /var/lib/docker, but that doesn't really matter: you use this when you don't care where the data is stored, you just want persistence.
The db-data volume that Docker administers on the host machine is then mounted at the path /var/lib/mysql/data in the container, so anything written to this path in the container persists for as long as the volume isn't deleted on the host.
I have one server where I create an overlay network with the following command:
docker network create --driver=overlay --attachable caja_gestiones
On server two I want to use docker-compose to deploy all my containers. One of them uses the gestiones network as well as the default network. This is my docker-compose.yml:
version: '3.3'
services:
  msgestiones:
    image: msgestiones:latest
    hostname: msgestiones
    container_name: msgestiones
    environment:
      - perfil=desarrollo
      - JAVA_OPTS=-Xmx512M -Xms512M
      - TZ=America/Mexico_City
    networks:
      - marcador
      - caja_gestiones
  msmovimientos:
    image: msmovimientos:latest
    hostname: msmovimientos
    container_name: msmovimientos
    environment:
      - perfil=desarrollo
      - JAVA_OPTS=-Xmx512M -Xms512M
      - TZ=America/Mexico_City
    networks:
      - marcador
networks:
  marcador:
    driver: bridge
  caja_gestiones:
    external:
      name: caja_gestiones
When I run docker-compose, it throws an error saying that the network does not exist. But if I first run a dummy container attached to that network, the network appears and the compose works. How can I make compose use the overlay network without running a dummy container first?
Did you try to deploy it as a stack instead of with compose? You can use the same compose file, but deploy it with docker stack deploy -c composefile.yaml yourstackname.
I want to set up ownCloud with Docker and Docker Compose. To achieve this I have a docker-compose.yml with 3 containers and their volumes:
version: '2'
services:
  nginx:
    build: ./nginx
    networks:
      - frontend
      - backend
    volumes:
      - owncloud:/var/www/html
  owncloud:
    build: ./owncloud
    networks:
      - backend
    volumes:
      - owncloud:/var/www/html
      - data:/data
  mysql:
    build: ./mariadb
    volumes:
      - mysql:/var/lib/mysql
    networks:
      - backend
volumes:
  owncloud:
    driver: local
  data:
    driver: local
  mysql:
    driver: local
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
I also tried it without the data volume. ownCloud could not write to /data (or, without that volume, to /var/www/html/data). The log only shows timestamps from whenever I accessed ownCloud. Changing data:/data to a host-mounted volume /var/ownclouddata:/data makes no difference.
The Dockerfiles have only one line each: FROM image.
I've tried adding RUN mkdir /data, but it didn't fix anything.
You need to declare the volume in the Dockerfile, something like this:
VOLUME /data
Later, in your docker-compose file, you can either use a named volume like you did earlier, or simply use it like this:
/mnt/test:/data
Here /mnt/test is the path on your host and /data is the path inside the container.
Hope it helps!