error while removing network: <network> id has active endpoints - docker

I am trying to run docker-compose down from a Jenkins job:
sudo docker-compose down --remove-orphans
I have used the --remove-orphans flag with docker-compose down, but it still gives the error below:
Removing network abc
error while removing network: network id ************ has active endpoints
Failed command with status 1: sudo docker-compose down --remove-orphans
Below is my docker-compose file:
version: "3.9"
services:
  abc:
    image: <img>
    container_name: 'abc'
    hostname: abc
    ports:
      - "5****:5****"
      - "1****:1***"
    volumes:
      - ~/.docker-conf/<volume>
    networks:
      - <network>
  container-app-1:
    image: <img2>
    container_name: 'container-app-1'
    hostname: 'container-app-1'
    depends_on:
      - abc
    ports:
      - "8085:8085"
    env_file: ./.env
    networks:
      - <network>
networks:
  <network>:
    driver: bridge
    name: <network>

To list your networks, run docker network ls. You should see your <network> there. Then get the containers still attached to that network with the following command (replacing the network name at the end):
docker network inspect \
--format '{{range $cid,$v := .Containers}}{{printf "%s: %s\n" $cid $v.Name}}{{end}}' \
"<network>"
For each returned container ID, you can check why it hasn't stopped (inspect its logs, make sure it is part of the compose project, etc.), or manually stop it if it isn't needed anymore (replacing <cid> with your container ID):
docker container stop "<cid>"
Then you should be able to stop the compose project.
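If several containers are still attached and you are sure they can all be stopped, the two steps can be combined. A sketch, assuming the same Go template as above and GNU xargs:
docker network inspect \
  --format '{{range $cid,$v := .Containers}}{{println $cid}}{{end}}' \
  "<network>" | xargs -r docker container stop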

There is also a situation where no containers are attached at all, but the error still occurs. In that case, systemctl restart docker helped me.

This can also happen when you have a db instance running in a separate container that uses the same network. In this case, stopping the db container with
docker container stop "<cid>"
will stop the container. You can find the ID of the container that is using the network with the command provided by @BMitch:
docker network inspect \
--format '{{range $cid,$v := .Containers}}{{printf "%s: %s\n" $cid $v.Name}}{{end}}' \
"<network>"
But in my case, when I did that, it also made that postgres instance "orphaned". Then I ran:
docker-compose up -d --remove-orphans
After that, I booted up a new db instance (postgres) using the docker-compose file and mapped its data directory volume to the data directory of the previous db instance:
volumes:
  - './.docker/postgres/:/docker-entrypoint-initdb.d/'
  - ~/backup/postgress:/var/lib/postgresql/data
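A minimal sketch of what that replacement service could look like (the service name and image tag are assumptions; the volume mappings are the ones above):
services:
  postgres:
    image: postgres:latest
    volumes:
      - './.docker/postgres/:/docker-entrypoint-initdb.d/'
      - ~/backup/postgress:/var/lib/postgresql/data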

My problem was solved only by restarting Docker and then deleting the container manually from Docker Desktop.

Related

Docker containers refuse to communicate when running docker-compose in dind - Gitlab CI/CD

I am trying to set up some integration tests in Gitlab CI/CD - in order to run these tests, I want to reconstruct my system (several linked containers) using the Gitlab runner and docker-compose up. My system is composed of several containers that communicate with each other through mqtt, and an InfluxDB container which is queried by other containers.
I've managed to get to a point where the runner actually executes the docker-compose up and creates all the relevant containers. This is my .gitlab-ci.yml file:
image: docker:19.03
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
services:
  - name: docker:19.03-dind
    alias: localhost
before_script:
  - docker info
integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
As you can see, I am installing docker-compose, running compose up on my config yml file and then executing my integration tests from within one of the containers. When I run that final line on my local system, the integration tests run as expected; in the CI/CD environment, however, all the tests throw some variation of ConnectionRefusedError: [Errno 111] Connection refused errors. Running docker-compose ps seems to show all the relevant containers Up and healthy.
I have found that the issues arise whenever one container tries to communicate with another, through lines like self.localClient = InfluxDBClient("influxdb", 8086, database = "replay") or client.connect("mosquitto", 1883, 60). This works fine in my local docker environment, as the hostnames resolve to the other running containers, but it seems to create problems in this Docker-in-Docker setup. Does anyone have any suggestions? Do containers in this dind environment have different names?
It is also worth mentioning that this could be a problem with my docker-compose.yml file not being configured correctly to start healthy containers. docker-compose ps suggests they are up, but is there a better way to check whether they are running correctly? Here's an excerpt of my docker-compose file:
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    volumes:
      - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet
networks:
  web:
  influxnet:
    internal: true
  brokernet:
    driver: bridge
    internal: true
There are a few possible reasons why this error occurs:
Docker 19.03-dind has a known bug where it is unable to create networks when used as a service without a proper TLS setup. Have you correctly set up your Gitlab Runner with TLS certificates? I've noticed you are using "/certs" in your gitlab-ci.yml; did you mount your runner to share the volume where the certificates are stored?
If your Gitlab Runner is not running with privileged permissions or is not correctly configured to use the remote machine's network socket, you won't be able to create networks. A simple solution to unify your networks in a CI/CD environment is to configure your machine using this docker-compose followed by this script. (Source) It'll set up a local bridged network where containers can communicate using hostnames.
There's an issue with your gitlab-ci.yml as well. When you execute this part of the script:
services:
  - name: docker:19.03-dind
    alias: localhost
integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
You're aliasing your dind service hostname to localhost, but you never use it; instead, you call docker and docker-compose directly from your image, binding them to a different set of networks than the ones Gitlab creates automatically.
Let's try this solution (albeit I couldn't test it right now, so I apologize if it doesn't work right away):
gitlab-ci.yml
image: docker/compose:debian-1.28.5 # You should be running as a privileged Gitlab Runner
services:
  - docker:dind
integration-tests:
  stage: test
  script:
    #- apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
docker-compose.yml
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    # volumes: you're mounting your volume to an ephemeral folder in the CI pipeline, which will be wiped afterwards (if you're using Docker-DIND)
    #   - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet
networks:
  web: # hostnames are created automatically, you don't need a local setup through localhost
  influxnet:
  brokernet:
    driver: bridge # if you're using a bridge driver, overlay2 doesn't make sense
Both of these commands will install a Gitlab Runner as a Docker container without the hassle of having to configure it manually to allow socket binding for your project.
(1):
docker run --detach \
  --name gitlab-runner \
  --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
And then (2):
docker run --rm \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner register \
  --non-interactive \
  --description "monitoring cluster instance" \
  --url "https://gitlab.com" \
  --registration-token "replacethis" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --locked=true \
  --docker-privileged=true \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
Remember to replace the registration token in command (2) with your own.

docker-compose how to define container scoped network like in docker run?

I am running 2 containers where mycontainer2 must use the same network stack as mycontainer1, as if the two containers were running on the same machine. Here is how I try to do it using docker run with --network container:xxx:
$ docker run -it --rm --name mycontainer1 -p 6666:7777 myregistry/my-container1:latest
$ docker run -it --rm --network container:mycontainer1 --name mycontainer2 myregistry/my-container2:latest
I tried to replicate this behavior using docker-compose instead, but the networks: definition in docker-compose.yaml doesn't offer anything equivalent to the --network container:xxx option of docker run. Is it possible in docker-compose to configure two containers to use the same network stack?
This is a network_mode: setting.
version: '3.8'
services:
  mycontainer1:
    image: myregistry/my-container1:latest
    ports: ['6666:7777']
  mycontainer2:
    image: myregistry/my-container2:latest
    network_mode: service:mycontainer1 # <---
Since Compose will generally pick its own container names, this service:name form uses the container matching the named Compose service. (If you override container_name: then you can also use container:mycontainer1 the same way you did with docker run.)
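For illustration, if you do set container_name:, the container: form might look like this (an untested sketch; depends_on is added so the target container is created first):
version: '3.8'
services:
  mycontainer1:
    image: myregistry/my-container1:latest
    container_name: mycontainer1
    ports: ['6666:7777']
  mycontainer2:
    image: myregistry/my-container2:latest
    depends_on: [mycontainer1]
    network_mode: container:mycontainer1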
Creating an external network and using it inside the docker-compose YAML manifest might help. Here is how you do it:
version: '3.7'
networks:
  default:
    external:
      name: an-external-network
services:
  my-container1:
    ...
  my-container2:
    ...
Note: use the docker network create command to create an-external-network before running docker-compose up.
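For example:
docker network create an-external-network
docker-compose up -d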

Executing docker with php and then use other container in command

I'm using one docker container for php and another one for sql. I also have a makefile to run commands in an instance of the php container. This is the entry I use for command execution, and I would like it to use the sql container I have:
command:
	docker run --rm \
	  --volume=${PWD}/code:/code \
	  --volume=${PWD}/json:/json:rw \
	  --volume=${PWD}/file:/file:rw \
	  own_php:latest \
	  time php /code/public/index_hex.php ${page}
If I try to execute this command from the makefile, I get the following error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed:
Name does not resolve
This is the docker-compose file I have in my project:
version: '3'
services:
  sql:
    image: mariadb
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./init-db:/docker-entrypoint-initdb.d
      - ./.mysql-data:/var/lib/mysql/
  crawler:
    build:
      context: './docker-base'
    depends_on:
      - sql
    volumes:
      - ./json:/json:rw
      - ./file:/file:rw
      - ./code:/code
But if I run the container with docker-compose and enter it, the command executes fine.
Is it possible for a docker run --rm container to use another container?
Docker Compose creates a network for each compose file, and you have to attach your manually started docker run container to that network to be able to reach the other containers on it. The network will usually be named something like directoryname_default, based on the name of the directory holding the compose file, and it will show up in the docker network ls listing.
You should run something like
docker run --rm --net directoryname_default ...
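Applied to the makefile entry from the question, that might look like the following (a sketch; take the actual network name from docker network ls, after which the database is reachable by its service name, sql, on port 3306):
command:
	docker run --rm \
	  --net directoryname_default \
	  --volume=${PWD}/code:/code \
	  --volume=${PWD}/json:/json:rw \
	  --volume=${PWD}/file:/file:rw \
	  own_php:latest \
	  time php /code/public/index_hex.php ${page}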

Docker DNS with Multiple Projects Using the Same Network

I have the following docker-compose.yml file:
version: '3'
services:
frontend:
image: alpine
command: tail -f /dev/null
networks:
- shared
- default
backend:
image: alpine
command: tail -f /dev/null
networks:
- shared
- default
networks:
shared:
external: true
Based on the file from above I create two projects which use the same network (shared) and the same service names (frontend and backend):
docker-compose -p foo up -d
docker-compose -p bar up -d
Does the DNS of docker make sure that docker-compose -p foo exec frontend ping backend only resolves to the backend container in project foo and vice versa for project bar?
According to https://github.com/docker/compose/issues/4645, the resolution order in this case is non-deterministic. Since the networks are converted to an unordered dict in golang, their order is not preserved, which implies (see https://github.com/docker/libnetwork/blob/master/sandbox.go#L593) that the order in which endpoints are queried doesn't match the order of the networks.
The solution is to define a network priority (https://docs.docker.com/compose/compose-file/compose-file-v2/#priority) if you are using a version 2 compose file, or to use a fully qualified DNS name of the form service.network, such as backend.foo_default or backend.shared.
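A minimal sketch of the version 2 priority syntax (the priority values are illustrative; the higher-priority network is connected first):
version: '2.4'
services:
  frontend:
    image: alpine
    command: tail -f /dev/null
    networks:
      default:
        priority: 1000
      shared:
        priority: 100
networks:
  shared:
    external: true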
Based on your setup, I used nslookup to find out whether the DNS resolution is isolated or not:
$ docker-compose -p foo exec frontend nslookup backend
Name: backend
Address 1: 172.19.0.2 foo_backend_1.shared
Address 2: 172.19.0.4 bar_backend_1.shared
As you can see from the output above, backend resolves to both of the containers.
If you use docker swarm, you can qualify hostnames with the service name to disambiguate containers, but I don't believe docker-compose does this.

How can I add hostnames to a container on the same docker network?

Suppose I have a docker compose file with two containers. Both reference each other in their /etc/hosts file. Container A has a reference for container B and vice versa. And all of this happens automatically. Now I want to add one or more hostnames to B in A's hosts file. How can I go about doing this? Is there a special way I can achieve this in Docker Compose?
Example:
172.0.10.166 service-b my-custom-hostname
Yes. In your compose file, you can specify network aliases.
services:
  db:
    networks:
      default:
        aliases:
          - database
          - postgres
In this example, the db service could be reached by other containers on the default network using db, database, or postgres.
You can also add aliases to running containers using the docker network connect command with the --alias= option.
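For a running container, that could look like this (a sketch; the network and container names are placeholders):
docker network connect --alias database --alias postgres my-network my-container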
Docker compose has an extra_hosts feature that allows additional entries to be added to the container's host file.
Example
docker-compose.yml
web1:
  image: tomcat:8.0
  ports:
    - 8081:8080
  extra_hosts:
    - "somehost:162.242.195.82"
    - "otherhost:50.31.209.229"
web2:
  image: tomcat:8.0
  ports:
    - 8082:8080
web3:
  image: tomcat:8.0
  ports:
    - 8083:8080
Demonstrate host file entries
Run docker compose with the new docker 1.9 networking feature:
$ docker-compose --x-networking up -d
Starting tmp_web1_1
Starting tmp_web2_1
Starting tmp_web3_1
and look at the hosts file in the first container. It shows the other containers, plus the additional custom entries:
$ docker exec tmp_web1_1 cat /etc/hosts
..
172.18.0.4 web1
172.18.0.2 tmp_web2_1
172.18.0.3 tmp_web3_1
50.31.209.229 otherhost
162.242.195.82 somehost
If I understand your question correctly, you can add a hostname entry to a container's /etc/hosts via the --add-host flag, which takes a name and an address:
$ docker run ... --add-host="droid:xx.xx.xx.xx"
The container's /etc/hosts will then contain the entry:
xx.xx.xx.xx droid
Of course, xx.xx.xx.xx will need to be reachable from inside the container you just started with the docker run command. You can pass one or more --add-host flags.
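For example, to add multiple entries and verify them (a sketch; the IP addresses are placeholders):
$ docker run --rm --add-host="droid:10.0.0.5" --add-host="tablet:10.0.0.6" alpine cat /etc/hosts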
More details about --add-host here:
http://docs.docker.com/v1.8/reference/commandline/run/
