I have a docker-compose setup with php, mysql and so on. After a few days, I cannot bring the containers down, as everything stops except for mysql. It always gives me the following error:
ERROR: network docker_default has active endpoints
this is my docker-compose.yml
version: '2'
services:
  php:
    build: php-docker/.
    container_name: php
    ports:
      - "9000:9000"
    volumes:
      - /var/www/:/var/www/
    links:
      - mysql:mysql
    restart: always
  nginx:
    build: nginx-docker/.
    container_name: nginx
    links:
      - php
      - mysql:mysql
    environment:
      WORDPRESS_DB_HOST: mysql:3306
    ports:
      - "80:80"
    volumes:
      - /var/log/nginx:/var/log/nginx
      - /var/www/:/var/www/
      - /var/logs/nginx:/var/logs/nginx
      - /var/config/nginx/certs:/etc/nginx/certs
      - /var/config/nginx/sites-enabled:/etc/nginx/sites-available
    restart: always
  mysql:
    build: mysql-docker/.
    container_name: mysql
    volumes:
      - /var/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: pw
      MYSQL_USER: florian
      MYSQL_PASSWORD: pw
      MYSQL_DATABASE: database
    restart: always
  phpmyadmin:
    build: phpmyadmin/.
    links:
      - mysql:db
    ports:
      - 1234:80
    container_name: phpmyadmin
    environment:
      PMA_ARBITRARY: 1
      PMA_USERNAME: florian
      PMA_PASSWORD: pw
      MYSQL_ROOT_PASSWORD: pw
    restart: always
docker network inspect docker_default gives me:
[
    {
        "Name": "docker_default",
        "Id": "1ed93da1a82efdab065e3a833067615e2d8b76336968a2591584af5874f07622",
        "Created": "2017-03-08T07:21:34.969179141Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "85985605f1c0c20e5ee9fedc95800327f782beafc0049f51e645146d2e954b7d": {
                "Name": "mysql",
                "EndpointID": "84fb19cd428f8b0ba764b396362727d9809cd1cfea536e648bfc4752c5cb6b27",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
UPDATE
Seems that docker rm mysql -f stopped the mysql container, but the network kept running.
I removed the stale endpoint from the network with docker network disconnect -f docker_default mysql. But I'm pretty interested in how I got into this situation. Any ideas?
I resolved a similar problem, which appeared after renaming services in my docker-compose.yml file, by running this before stopping the containers:
docker-compose down --remove-orphans
I'm guessing you edited the docker-compose file while the containers were running...?
Sometimes, if you edit the docker-compose file before doing a docker-compose down, there will be a mismatch in what docker-compose attempts to stop. First run docker rm 8598560 to remove the currently running container. From there on, make sure you do a docker-compose down before editing the file. Once you have removed the container, docker-compose up should work.
You need to disconnect stale endpoint(s) from the network. First, get the endpoint names with
docker network inspect <network>
You can find your endpoints in the output JSON: Containers -> Name. Now, simply force disconnect them with:
docker network disconnect -f <network> <endpoint>
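When several stale endpoints are left behind, the disconnect commands can be generated from the inspect output rather than typed one by one. A minimal sketch, assuming the JSON shape shown in the question (the container ID here is illustrative):

```python
import json

# Shape of `docker network inspect <network>` output, as in the question;
# the container ID and endpoint name are illustrative.
inspect_output = """
[
  {
    "Name": "docker_default",
    "Containers": {
      "85985605f1c0": {"Name": "mysql"}
    }
  }
]
"""

network = json.loads(inspect_output)[0]
# One force-disconnect command per endpoint still attached to the network.
commands = [
    f"docker network disconnect -f {network['Name']} {endpoint['Name']}"
    for endpoint in network["Containers"].values()
]
print(commands)  # -> ['docker network disconnect -f docker_default mysql']
```

After disconnecting every listed endpoint, docker network rm on the network should succeed.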
Unfortunately none of the above worked for me. Restarting the Docker service solved the problem.
I happened to run into this error message because I had two networks with the same name (copy/paste mistake).
What I did to fix it:
docker-compose down - ignore errors
docker network list - note if any container is using it and stop it if necessary
docker network prune to remove all dangling networks, although you may want to just docker network rm <network name>
rename the second network to a unique name
docker-compose up
This worked for me:
systemctl restart docker
or
service docker restart
then remove the network normally:
docker network rm [network name]
This issue happens rarely, in my case while running dozens of services in parallel on the same instance; docker-compose could not complete the deployment/recreation of these containers because of an "active network interface" that was bound and stuck.
I also tried flags such as:
--force-recreate (recreate containers even if their configuration and image haven't changed)
or
--remove-orphans (remove containers for services not defined in the Compose file)
but it did not help.
Eventually I came to the conclusion that restarting the Docker engine is the last resort: Docker closes any network connections associated with the daemon (between containers and with the API), and this solved the problem (sudo service docker restart).
Related
I want to use a named volume inside my docker-compose file which binds to a user-defined path on the host. It seems like it should be possible, since I have seen multiple examples online, one of them being How can I mount an absolute host path as a named volume in docker-compose?.
So, I wanted to do the same. Please bear in mind that this is just an example and I have a use case where I want to use named volumes for DRYness.
Note: I am using Docker for Windows with WSL2
version: '3'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: D:\Some\path_in\my\host
      type: none
# volumes:
#   caddy_data:
#     external: true
#     name: caddyvol
This does not work, and every time I do docker compose up -d I get the error:
[+] Running 1/2
- Volume "caddy_data" Created 0.0s
- Container project-example-1 Creating 0.9s
Error response from daemon: failed to mount local volume: mount D:\Some\path_in\my\host:/var/lib/docker/volumes/caddy_data/_data, flags: 0x1000: no such file or directory
But if I create the volume first using
docker volume create --opt o=bind --opt device=D:\Some\path_in\my\host --opt type=none caddyvol
and then use the above in my docker compose file (see the above file's commented section), it works perfectly.
I have even tried to see the difference between the volumes created and have found none
docker volume inspect caddy_data
[
    {
        "CreatedAt": "2021-12-12T18:19:20Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "ngrok-compose",
            "com.docker.compose.version": "2.2.1",
            "com.docker.compose.volume": "caddy_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/caddy_data/_data",
        "Name": "caddy_data",
        "Options": {
            "device": "D:\\Some\\path_in\\my\\host",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
docker volume inspect caddyvol
[
    {
        "CreatedAt": "2021-12-12T18:13:17Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/caddyvol/_data",
        "Name": "caddyvol",
        "Options": {
            "device": "D:\\Some\\path_in\\my\\host",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
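The two Options sections above can indeed be compared programmatically; a quick sketch with the values transcribed from the inspect output confirms there is no difference in the stored definitions, so whatever differs is not visible through docker volume inspect:

```python
# Options transcribed from `docker volume inspect caddy_data` / `caddyvol` above.
caddy_data_opts = {"device": "D:\\Some\\path_in\\my\\host", "o": "bind", "type": "none"}
caddyvol_opts = {"device": "D:\\Some\\path_in\\my\\host", "o": "bind", "type": "none"}

# Collect any key whose value differs between the two volumes.
diff = {
    key: (caddy_data_opts.get(key), caddyvol_opts.get(key))
    for key in caddy_data_opts.keys() | caddyvol_opts.keys()
    if caddy_data_opts.get(key) != caddyvol_opts.get(key)
}
print(diff)  # -> {} (no differing keys)
```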
Any ideas what's going wrong in here?
Finally managed to figure it out, thanks to someone pointing out half of my mistake. While defining the volume in the compose file, the device should be in Linux path format, without the : after the drive name. Also, the version number should be fully defined. So, in the example case, it should be:
version: '3.8'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: d/Some/path_in/my/host
      type: none
But this still did not work, and it seemed to fail only on Docker Desktop for Windows. So, I went into \\wsl.localhost\docker-desktop-data\version-pack-data\community\docker\volumes and checked the difference between the manually created volume and the volume generated from the compose file.
The only difference was in the MountDevice key in each volume's opts.json file. The manually created volume had /run/desktop/mnt/host/ prepended to the path provided. So, I updated my compose file to:
version: '3.8'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: /run/desktop/mnt/host/d/Some/path_in/my/host
      type: none
And this worked!
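The path rewriting applied by hand above can be captured in a small helper. This is only a sketch of the convention observed here (drive letter lowercased, colon dropped, backslashes flipped, /run/desktop/mnt/host/ prefix), not an official Docker Desktop API:

```python
def to_desktop_mount_device(win_path: str) -> str:
    """Rewrite a Windows path (e.g. 'D:\\Some\\path') into the
    /run/desktop/mnt/host/... form that worked in the compose file above."""
    drive, _, rest = win_path.partition(":")
    rest = rest.replace("\\", "/").lstrip("/")
    return f"/run/desktop/mnt/host/{drive.lower()}/{rest}"

print(to_desktop_mount_device("D:\\Some\\path_in\\my\\host"))
# -> /run/desktop/mnt/host/d/Some/path_in/my/host
```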
I'm unable to mount a host directory (on a Raspberry Pi) into a Docker container, api_service, even with chmod -R 777 on the host.
I was able to mount it by running the api_service from the command line with docker start --mount type=bind,src=/data/yarmp-data,target=/data/yarmp-data docker_api_service_1; with docker inspect containerId, the Mounts section told me the mount was done, and inside the container that was the case. But I'd like to achieve that with docker-compose.
I tried different syntaxes in the docker-compose.yaml file, but never achieved it; each time I removed all containers and images, then ran docker-compose build and docker-compose up.
What am I missing? Is there a way to trace the mount options at startup of the container?
Should the target directory have been created in the target image before mounting it in docker-compose.yaml?
docker-compose.yaml
#Doc: https://github.com/compose-spec/compose-spec/blob/master/spec.md
version: '3.2'
services:
  api_service:
    build: ./api_service
    restart: always
    ports:
      - target: 8080
        published: 8080
    depends_on:
      - postgres_db
    links:
      - postgres_db:yarmp-db-host # database is postgres_db hostname into this api_service
    volumes:
      - type: bind
        source: $HOST/data/yarmp-data #Host with this version not working
        source: /data/yarmp-data #Host absolute path not working
        #source: ./mount-test #not working either
        target: /data/yarmp-data
      #- /data/yarmp-data:/data/yarmp-data # not working either
  postgres_db:
    build: ./postgres_db
    restart: always
    ports:
      - target: 5432
        published: 5432
    env_file:
      - postgres_db/pg-db-database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
postgres_db/Dockerfile
FROM postgres:latest
LABEL maintainer="me@mail.com"
RUN mkdir -p /docker-entrypoint-initdb.d
COPY yarmp-dump.sql /docker-entrypoint-initdb.d/
api_service/Dockerfile
FROM arm32v7/adoptopenjdk
LABEL maintainer="me@mail.com"
RUN apt-get update
RUN apt-get -y install git curl vim
CMD ["/bin/bash"]
#csv files data
RUN mkdir -p /data/yarmp-data #Should I create it or not??
RUN mkdir -p /main-app
WORKDIR /main-app
# JAVA APP DATA
ADD my-api-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar","/main-app/app.jar"]
Seems my entire docker-compose.yaml file was not correct.
As pointed out by @xdhmoore there was an indentation issue, and others.
I figured it out by:
validating the docker-compose.yaml with docker-compose config
Tabs are NOT permitted by the YAML spec; USE ONLY SPACES FOR INDENTATION
Note that vim's default configuration file /usr/share/vim/vim81/ftplugin/yaml.vim rightly replaces tabs with spaces...
The long-syntax entries had been indented in my editor with tabs, where 2 spaces were what worked. Here is my final docker-compose.yaml:
docker-compose.yaml
version: '3.2'
services:
  api_service:
    build: ./api_service
    restart: always
    ports:
      - target: 8080
        published: 8080 #2 spaces before published
    depends_on:
      - postgres_db
    links:
      - postgres_db:yarmp-db-host
    volumes:
      - type: bind
        source: /data/yarmp-data #2 spaces before source, meaning same level as previous '- type:...' and add 2 spaces more
        target: /data/yarmp-data #2 spaces before target
  postgres_db:
    build: ./postgres_db
    restart: always
    ports:
      - target: 5432
        published: 5432 #2 spaces before published
    env_file:
      - postgres_db/pg-db-database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
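Since tabs are not permitted in YAML indentation, a tiny pre-check can flag them before docker-compose config does; a minimal sketch:

```python
def tab_indented_lines(yaml_text: str) -> list:
    """Return 1-based numbers of lines whose leading whitespace contains a tab."""
    bad = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        # Leading whitespace is everything before the first non-space character.
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(lineno)
    return bad

sample = "services:\n  api_service:\n\trestart: always\n"
print(tab_indented_lines(sample))  # -> [3]
```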
This is based on the YAML in your answer. When I plug it into this yaml to json converter, I get:
{
  "version": "3.2",
  "services": null,
  "api_service": {
    "build": "./api_service",
    "restart": "always",
    "ports": [
      {
        "target": "8080\npublished: 8080"
      }
    ],
    "depends_on": [
      "postgres_db"
    ],
    "links": [
      "postgres_db:yarmp-db-host"
    ],
    "volumes": [
      {
        "type": "bind\nsource: /data/yarmp-data"
      }
    ]
  },
  "postgres_db": {
    "build": "./postgres_db",
    "restart": "always",
    "ports": [
      {
        "target": "5432\npublished: 5432"
      }
    ],
    "env_file": [
      "postgres_db/pg-db-database.env"
    ],
    "volumes": [
      "database-data:/var/lib/postgresql/data/"
    ]
  },
  "volumes": {
    "database-data": null
  }
}
You can see several places where the result is something like "type": "bind\nsource: /data/yarmp-data".
It appears that YAML is interpreting the source line here as the 2nd line of a multiline string. However, if you adjust the indentation to line up with the t in - type, you end up with:
...
"volumes": [
  {
    "type": "bind",
    "source": "/data/yarmp-data",
    "target": "/data/yarmp-data"
  }
]
...
The indentation in YAML is tricky (and it matters), so I've found the above and similar tools helpful to get what I want. It also helps me to think about YAML in terms of lists and objects and strings. Here - creates a new item in a list, and type: bind is a key-value in that item (not in the list). Then source: blarg is also a key-value in the same item, so it makes sense that it should line up with the t in type. Indenting more indicates you are continuing a multiline string, and I think if you indented less (like aligning with -), you would get an error or end up adding a key-value pair to one of the objects higher up the hierarchy.
Anyway, it's certainly confusing. I've found such online tools to be helpful.
I need to find a volume easily by label or name, not by a Docker-assigned ID, like:
docker volume ls --filter label=key=value
but if I try to add a 'container_name' or 'labels' to docker-compose.yaml, I can't see any label or name assigned to the volume when I inspect it; here is the output:
>>> docker volume inspect <volume_id>
[
    {
        "CreatedAt": "2020-10-28T11:41:51+01:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/4dce13df34f4630b34fbf1f853f7b59dbee2e3150a5122fa38d02024c155ec7d/_data",
        "Name": "4dce13df34f4630b34fbf1f853f7b59dbee2e3150a5122fa38d02024c155ec7d",
        "Options": null,
        "Scope": "local"
    }
]
I believe I should be able to filter volumes by label and name.
Here is the part of the docker-compose.yml config file for the mongo service:
version: '3.4'
services:
  mongodb:
    container_name: some_name
    image: mongo
    labels:
      com.docker.compose.project: app-name
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo:/data/db
I'm not exactly sure what you're trying to achieve here, but I hope something in my response will be helpful.
You can define a named volume within your docker-compose.yml
version: '3.4'
services:
  mongodb:
    container_name: some_name
    image: mongo
    labels:
      com.docker.compose.project: app-name
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongo_db:/data/db
volumes:
  mongo_db:
You could then use the docker volume inspect command to see some details about this volume.
docker volume inspect mongo_db
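For reference, Compose attaches com.docker.compose.* labels to named volumes it creates (as visible in inspect output elsewhere on this page), so they can be filtered with docker volume ls --filter label=com.docker.compose.project=<project>. Filtering the parsed docker volume inspect output works too; a sketch with illustrative data:

```python
import json

# Illustrative `docker volume inspect` output for a compose-created volume;
# the name and project are assumptions for the example.
inspect_output = """
[
  {
    "Name": "mongo_db",
    "Labels": {
      "com.docker.compose.project": "app-name",
      "com.docker.compose.volume": "mongo_db"
    }
  }
]
"""

volumes = json.loads(inspect_output)
# Keep only volumes labeled as belonging to the given Compose project.
matches = [
    volume["Name"]
    for volume in volumes
    if (volume.get("Labels") or {}).get("com.docker.compose.project") == "app-name"
]
print(matches)  # -> ['mongo_db']
```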
docker-compose.yml
services:
  idprovider-app:
    container_name: idprovider-app
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    volumes:
      - keycloak-data-volume:/var/lib/keycloak/data
    ports:
      - "8090:8090"
      - "8443:8443"
volumes:
  keycloak-data-volume:
    external: true
dockerfile
FROM jboss/keycloak:7.0.1
EXPOSE 8080
EXPOSE 8443
docker inspect "container"
"Mounts": [
    {
        "Type": "volume",
        "Name": "keycloak-data-volume",
        "Source": "/mnt/sda1/var/lib/docker/volumes/keycloak-data-volume/_data",
        "Destination": "/var/lib/keycloak/data",
        "Driver": "local",
        "Mode": "rw",
        "RW": true,
        "Propagation": ""
    }
],
docker volume inspect keycloak-data-volume
[
    {
        "CreatedAt": "2019-12-10T19:31:55Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/keycloak-data-volume/_data",
        "Name": "keycloak-data-volume",
        "Options": {},
        "Scope": "local"
    }
]
There are no errors, but it doesn't save state. I have no idea what's wrong. I run it on Windows 10.
Using the default database location, you may try this option with docker-compose:
keycloak:
  image: quay.io/keycloak/keycloak:14.0.0
  container_name: keycloak
  environment:
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: admin
  ports:
    - "8082:8080"
  restart: always
  volumes:
    - .local/keycloak/:/opt/jboss/keycloak/standalone/data/
I found a similar answer with plain docker: https://stackoverflow.com/a/60554189/6916890
docker run --volume /root/keycloak/data/:/opt/jboss/keycloak/standalone/data/
In case you are using the Docker setup mentioned in https://www.keycloak.org/getting-started/getting-started-docker and are looking for a way to persist data even if the container is killed, you can use Docker volumes and mount the /opt/keycloak/data/ folder from the container to a directory on your local machine.
The only change you need to make to the docker command mentioned in the getting-started doc is to add a volume mount using:
-v /<path-in-your-local-machine>/keycloak-data/:/opt/keycloak/data/
so the final docker run command, with an example local directory, would look like:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \
-v /Users/amit/workspace/keycloak/keycloak-data/:/opt/keycloak/data/ \
quay.io/keycloak/keycloak:19.0.3 start-dev
Which database are you using with it? I think you need to mount the database volume as well to save the state.
E.g. for Postgres:
services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
volumes:
  postgres_data:
or for MySQL:
services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
volumes:
  mysql_data:
You must specify the source of the database in the environment variables.
If you use a separate service for the PostgreSQL instance, you must specify the DB_ADDR environment variable in your service.
services:
  idprovider-app:
    container_name: idprovider-app
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      DB_VENDOR: POSTGRES
      # Specify hostname of the database (eg: hostname or hostname:port)
      DB_ADDR: hostname:5432
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    volumes:
      - keycloak-data-volume:/var/lib/keycloak/data
    ports:
      - "8090:8090"
      - "8443:8443"
volumes:
  keycloak-data-volume:
    external: true
My 2 cents: this worked for me with the persistent volume pointing to /opt/keycloak/data/h2, with Keycloak Docker version 19.0.1:
-v /<path-in-your-local-machine>/keycloak-data/:/opt/keycloak/data/h2
Update for version >= 17.0
To complement lazylead's answer, you need to use /opt/keycloak/data/ instead of /opt/jboss/keycloak/standalone/data/ for Keycloak version >= 17.0.0.
https://stackoverflow.com/a/60554189/5424025
I have a docker-compose networking issue. I created a shared space with containers for Ubuntu, TensorFlow, and RStudio, which do an excellent job of sharing the volume between themselves and the host; but when it comes to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it. My docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  #ubuntu(16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true
  #tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true
  #rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true
volumes:
  ubuntu:
  tensorflow:
  rstudio:
networks:
  default:
    driver: bridge
I am quite a Docker novice, so I'm not sure about my network settings. That being said, docker inspect composetest_default (the default network created for the compose) shows the containers are connected to the network. It is my understanding that in this kind of situation I should be able to freely call one service from each of the other containers, and vice versa:
"Containers": {
    "83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
        "Name": "composetest_ubuntu_1",
        "EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
        "MacAddress": "02:42:c0:a8:40:04",
        "IPv4Address": "192.168.64.4/20",
        "IPv6Address": ""
    },
    "8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
        "Name": "composetest_rstudio_1",
        "EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
        "MacAddress": "02:42:c0:a8:40:03",
        "IPv4Address": "192.168.64.3/20",
        "IPv6Address": ""
    },
    "ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
        "Name": "composetest_tensorflow_1",
        "EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
        "MacAddress": "02:42:c0:a8:40:02",
        "IPv4Address": "192.168.64.2/20",
        "IPv6Address": ""
    }
}
A bit of pre-history: I had tried with links: inside the docker-compose file, but decided to change to networks: on account of some deprecation warnings. Was this the right way to go about it?
Docker version 18.09.1
Docker-compose version 1.17.1
but when it comes down to using the resources of the one container inside the terminal of each other one, I hit a wall. I can't do as little as calling python in the terminal of the container that doesn't have it.
You cannot use Linux programs that are in the bin path of one container from another container, but you can use any service that is designed to communicate over a network, from any container in your docker-compose file.
Bin path:
$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
So programs in these paths that are not designed to communicate over a network are not usable from other containers, and need to be installed in each container where you need them, like python.