docker-compose down - network is external, skipping

I'm trying to bring down all services for an external network defined in my docker-compose file (using version 2).
When I try to do a docker-compose down, I get a message stating:
Network 'your_network' is external, skipping
Is there a way, using docker-compose, to stop and remove all the containers for a user-defined or external network?

I've encountered the same error. docker-compose can only stop containers that were started by docker-compose.
In my case, the containers I wanted to stop had been started by docker run.
So I stopped the containers one by one, then started them with the docker-compose.yml.
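A rough sketch of that sequence (the container names here are hypothetical):
docker stop my-service-1 my-service-2   # stop the containers that were started with docker run
docker-compose up -d                    # from now on, manage them through compose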
Not sure if your case is the same.

This isn't an error. You have a network declared as "external", meaning that it may also be used by other services or other docker-compose files. So when you stop those services, the network gets "skipped": the network is shared among all services that reference it, and trying to delete the external network would cause an error.
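If the goal is to also remove the network, a minimal sketch (assuming nothing else is still attached to your_network) is to bring the services down first and then delete the network explicitly:
docker-compose down              # stops and removes the containers; the external network is skipped
docker network rm your_network   # removes the network itself, once no container uses it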

Docker error messages (and Docker messages in general) suck, as always.
Originally I had multiple services that used a custom network, as shown here:
version: '3'
networks:
  mynet:
    external: true
services:
  nexus-repository:
    image: sonatype/nexus3
    ports:
      - '8082:8081'
    networks:
      - mynet
    volumes:
      - '/nexus-data:/nexus-data'
To remove the containers I tried:
sudo docker-compose down => NOPE
sudo docker network remove mynet => NOPE
sudo docker-compose rm -sfv nexus-repository => NOPE
Nothing worked until I completely removed all references to the external network.
Solution
services:
  nexus-repository:
    image: sonatype/nexus3
    ports:
      - '8082:8081'
    volumes:
      - '/nexus-data:/nexus-data'
No more:
Network 'mynet' is external, skipping
And no more containers!

In my case, I had indeed started my containers from a docker-compose file, but through VSCode's Remote Containers extension. If that's your case, you can stop your containers using VSCode's Docker extension (right-click on your container group -> Docker Compose Down).

Error in docker: network "path" declared as external, but could not be found

I am new to Docker. I have been assigned a task that uses a Docker container for development. I followed the tutorial for installing Docker and the containers on Windows 10, but I get the following error: network remaxmdcrm_remaxmd-network declared as external, but could not be found
The steps I've done so far are:
Cloned the repository from GitHub.
Installed Docker on my laptop.
Once I installed Docker, I went into the root of my project and ran the following command: docker-compose build -d -t docker-compose.yml - docker-compose.yml being the file in the root dir.
I opened the Docker app and ran the images created.
I ran the command docker-compose up. When I ran this command, the error I specified at the beginning appeared: network remaxmdcrm_remaxmd-network declared as external, but could not be found
docker-compose.yml
services:
  ui:
    build:
      context: .
      dockerfile: Dockerfile.development
    volumes:
      - .:/app
    ports:
      - "5000:5000"
    restart: unless-stopped
    networks:
      - remaxmdcrm_remaxmd-network
  redis:
    image: 'redis:alpine'
    networks:
      - remaxmdcrm_remaxmd-network
networks:
  remaxmdcrm_remaxmd-network:
    external: true
Ran: docker ps -a
ID             IMAGE
5e6cf997487c   remaxmd-site_ui:latest
451009e0a2a6   redis:alpine
85e7cde67d05   docker-compose.yml:latest
I might be doing something wrong here. Can somebody help me? I much appreciate your time!
I finally solved the issue. It came from the fact that I had remaxmdcrm_remaxmd-network declared as external in docker-compose.yml. The external network was not created during installation, so I needed to create a bridge network.
I ran the command docker network create "name_of_network"
For further details, here is the full documentation.
You can check with docker network ls that it was created and uses the bridge driver.
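A minimal sketch, using the network name from the question:
docker network create remaxmdcrm_remaxmd-network   # defaults to the bridge driver
docker network ls                                  # the new network should now be listed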
You shouldn't have to run a command to create the network before running docker-compose; Docker will create the network if it doesn't exist. The reason you're getting this error is that you declared the network as external, which means Docker expects it to already exist. If you need a new one, remove external: true.
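For illustration, the networks section without the external flag would look like this; Compose then creates the network itself on up and removes it on down:
networks:
  remaxmdcrm_remaxmd-network:
    driver: bridge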

How to set up Testcontainers for parallel runs with docker compose?

Here is the docker-compose file:
app:
  image: myimage
  depends_on:
    - nsqd
    - localstack
  command: ["run.sh"]
  environment:
    - "DYNAMODB=http://localstack:4569"
  ports:
    - 8080:8080
nsqd:
  image: nsqio/nsq
  command: /run
  ports:
    - "4150:4150"
    - "4151:4151"
localstack:
  image: localstack/localstack:latest
  ports:
    - 4569:4569
  environment:
    SERVICES: dynamodb
    DATA_DIR: /tmp/localstack/data
    HOSTNAME: localstack
This compose file is run in a Java JUnit test before any test method runs:
import java.io.File;
import org.junit.Before;
import org.testcontainers.containers.DockerComposeContainer;
import org.testcontainers.containers.wait.strategy.Wait;

@Before
public void setUp() throws Exception {
    new DockerComposeContainer(new File("docker-compose.yaml"))
            .withExposedService("nsqd", 4150, Wait.forListeningPort())
            .withExposedService("localstack", 4569, Wait.forListeningPort())
            .withExposedService("app", 8080, Wait.forListeningPort())
            .start();
}
When the test methods run one by one there are no problems at all. But when I try to run more than two tests at the same time, I get errors like this:
ERROR: for localstack Cannot start service localstack: driver failed programming external connectivity on endpoint hwfdrbmwpwn1_localstack_1 (e33d2a3098e74b1b8d87e3e595d9d9504ccddd4fe9c0605b20ebd3f22f50daa5): Bind for 0.0.0.0:4569 failed: port is already allocated
ERROR: for nsqlookupd Cannot start service nsqlookupd: driver failed programming external connectivity on endpoint hwfdrbmwpwn1_nsqlookupd_1 (fe62cec02a23a184d65b3f02776a14d77fdfbe639645ea0a11e07e8f11010e37): Bind for 0.0.0.0:4161 failed: port is already allocated
And those ports differ from the ones passed to withExposedService. On the other hand, all services from the compose file are started in an isolated network, so there should not be any conflicts, but they exist. Can anybody explain what is going on with the ports?
What additional config should be provided to Testcontainers to run docker-compose services multiple times at the same time?
The port defined with withExposedService is the port as seen from inside the container. Testcontainers will bind that port to a random external port. Have a read here:
https://www.testcontainers.org/features/networking
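A minimal sketch of reading that randomized mapping, assuming the DockerComposeContainer from the question is kept in a field named compose instead of being discarded after start():
// getServiceHost/getServicePort return the host and the randomly mapped port
// for a service/port pair registered via withExposedService
String host = compose.getServiceHost("localstack", 4569);
Integer port = compose.getServicePort("localstack", 4569);
String dynamoUrl = "http://" + host + ":" + port;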
Do you also stop your docker compose containers before each test method?
I would also suggest removing the port mappings from your docker-compose file, as they are not necessary with Testcontainers:
Note that it is not necessary to define ports to be exposed in the YAML file; this would inhibit reuse/inclusion of the file in other contexts.
Taken from: https://www.testcontainers.org/modules/docker_compose/
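For illustration, the localstack service from the question without a fixed host port mapping could look like this (withExposedService still works, since Testcontainers wires up its own randomized mapping):
localstack:
  image: localstack/localstack:latest
  environment:
    SERVICES: dynamodb
    DATA_DIR: /tmp/localstack/data
    HOSTNAME: localstack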
If I understood your setup correctly, you want to start and stop your docker-compose containers for each test, potentially with several different docker-compose files in different tests (or different tests with the same file) running at the same time.
There is an alternative library, Palantir's docker-compose-rule.
There is actually a collaboration going on between the two (Testcontainers and Palantir), since Testcontainers is far more generic, while Palantir's library went more in depth with docker-compose.
The collaboration started in 2018, but the library is still maintained, so it might still have a specialization advantage that solves your problem.

Running an executable inside a docker container from another container

I am trying to run an executable file from another docker container while already inside a docker container. Is this possible?
version: '3.7'
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
  winexe:
    build:
      context: ./winexe
      dockerfile: Dockerfile
    networks:
      - dataload
    ports:
      - '8001:8001'
    volumes:
      - '~/path/to/winexe:/usr/bin/winexe'
      - '~/.ssh:/ssh'
    depends_on:
      - py
networks:
  dataload:
    driver: bridge
I am trying to access Winexe from 'py'
Assuming you mean running another Docker container from inside a container, this can be done in several ways. Install the docker command inside your container and either:
1) Contact the hosting Docker instance over TCP/IP. For this you will have to have exposed the Docker host to the network, which is neither default nor recommended.
2) Map the Docker socket (usually /var/run/docker.sock) into your container using a volume; see the sketch after this list. This will allow the docker command inside the container to contact the host instance directly.
Be aware this essentially gives the container root-level access to the host! I'm sure there are many more ways to do the same, but approach number 2 is the one I see most often.
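A minimal sketch of that socket mount, applied to the py service from the question (the image would also need the docker CLI installed in it):
services:
  py:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # host Docker socket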
If you mean to run another executable inside another - already running - Docker container, you can do that in the above way as well by using docker exec, or by running some kind of daemon in the second container that accepts commands and runs the required command for you.
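For example, once the py container can reach the Docker daemon as described above, something like this would run the executable inside the already-running winexe container (the exact container name depends on your Compose project name, so check docker ps first):
docker ps --format '{{.Names}}'         # find the winexe container's actual name
docker exec <project>_winexe_1 winexe   # hypothetical name following Compose's <project>_<service>_1 scheme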
So you need to think of your containers as if they were two separate computers, or servers, and they can interact accordingly.
Happily, docker-compose gives you a url you can use to communicate between the containers. In the case of your docker-compose file, you could access the winexe container from your py container like so:
http://winexe:8001 // or ws://winexe:8001 or postgres://winexe:8001 (you get the idea)
(I've used port 8001 here because that's the port you've made available for winexe – I have no idea if it could be used for this).
So now what you need is something in your winexe container that listens for that signal and sends a useful reply (like a server answering a browser's ajax call).
Learn more here:
https://docs.docker.com/compose/networking/

Docker compose - external links fail after successful restart

the situation is this:
I have three different docker-compose files for three different projects: a frontend, a middleware, and a backend. The FE is Ember; middleware and backend are Spring (Boot), which should not matter here though. The middleware uses an external_link to the backend, and the frontend (UI) uses an external_link to the middleware.
When I start with a clean Docker (docker stop $(docker ps -aq), docker rm $(docker ps -aq)), everything works fine: I start the backend with docker-compose up, then the middleware, then the frontend. All external links work (I'm also running Cypress e2e tests on this setup - works fine).
Now, when I change something in the middleware, rebuild the image, stop the container (Ctrl+C) and restart it using docker-compose up, and then try to restart the frontend (Ctrl+C, then docker-compose up), Docker tells me:
Starting UI ... error
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: Encountered errors while bringing up the project.
Now what irritates me:
where is the "32f2db8e96a1" coming from? The middleware container name is set to "middleware", which is also used in the external link of the UI, and works fine for every clean startup (meaning, using docker rm "-all" before). Also, docker ps shows me that a container for the middleware is actually running.
Unfortunately, I cannot post the compose files here, but I am willing to add any info needed.
Running on Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
I would like to restart any of these containers without a broken external link. How do I achieve this?
Since you guys are taking time for me, I took the time to clean up two of the compose files. This is the UI/frontend one:
version: '2.1'
services:
  ui:
    container_name: x-ui
    build:
      dockerfile: Dockerfile
      context: .
    image: "xxx/ui:latest"
    external_links:
      - "middleware:backend"
    ports:
      - "127.0.0.1:4200:80"
    network_mode: bridge
This is the middleware:
version: '2.1'
services:
  middleware:
    container_name: x-middleware
    image: xxx/middleware:latest
    build:
      dockerfile: src/main/docker/middleware/Dockerfile
      context: .
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9003:9000"
    external_links:
      - "api"
    network_mode: "bridge"
The "api" one is essentially the same as middleware.
Please note: I removed volumes and environment, and I renamed things, so the error message names will not match perfectly. The naming schema is the same though: the service name is e.g. "middleware", and the container name uses an "x-" prefix ("x-middleware").

docker rabbitmq: how to expose a port and reuse a container with a docker file

Hi, I am finding it very confusing how to create a docker file that would run a RabbitMQ container, where I can expose the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I'm unsure how to run it:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got RabbitMQ working locally fine, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a RabbitMQ container? Where can I find a full, understandable example?
Do I need a docker file or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, that I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built from the RabbitMQ Dockerfile (which you don't need). The image will be downloaded the first time you run docker-compose up.
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside of the container. So, if you want to work with different ports, change the ones on the left. The right ones are defined internally.
This means you can access the management plugin at http://localhost:15672 (or, more generically, http://<host-ip>:<port mapped to 15672>).
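For example (an illustrative tweak, not something the image requires), remapping the management UI to host port 8080 would look like this, making it reachable at http://localhost:8080:
ports:
  - "5672:5672"
  - "8080:15672"   # host port 8080 -> container port 15672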
You can see more info on the RabbitMQ Image on the Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart that I get the same container?
I assume you want the same container because you want to persist the data. You can use docker-compose stop, restart your machine, then run docker-compose start; the same container is then used. However, if the container is ever deleted, you lose the data inside it.
That is why you are using volumes. The data collected in your container is also stored on your host machine. So, if you remove your container and start a new one, the data is still there because it was stored on the host.
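A short sketch of why the named volume makes this safe:
docker-compose down     # removes the container, but named volumes like rabbitmq_data are kept
docker volume ls        # rabbitmq_data is still listed
docker-compose up -d    # the new container re-attaches rabbitmq_data, so the data is back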
