I am trying to communicate from one Docker container running on my Win10 laptop with another container also running locally.
I start up the target container, and I see the following network:
docker network ls
NETWORK ID NAME DRIVER SCOPE
...
f85b7c89dc30 w3virtualservicew3id_w3-virtual-service-w3id bridge
I then start up my calling container with docker-compose up. I can then successfully connect my other container to the network via the command line:
docker network connect w3virtualservicew3id_w3-virtual-service-w3id w3vacationatibmservice_rest_1
However, I can't connect to that same network by adding it to the network section of my docker-compose.yml file for the calling container. I was under the impression that they both basically did the same thing:
networks:
  - w3_vacation-at-ibm_service
  - w3virtualservicew3id_w3-virtual-service-w3id
The error message tells me it can't find the network, which is not true, since I can connect via the command line, so I know it's really there and running:
ERROR: Service "rest" uses an undefined network "w3virtualservicew3id_w3-virtual-service-w3id"
Anyone have any idea what I'm missing?
The network you reference under your service is expected to be defined in the top-level networks section (the same applies to volumes):
version: 'X.Y'
services:
  calling_container:
    networks:
      - your_network

networks:
  your_network:
    external: true
Do you really have to use a separate compose file for your calling container? If both of your containers interact with each other, you should add them both to one and the same compose file. In that case, you don't have to specify any network at all; they will automatically be placed on the same network.
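For example, a single compose file along these lines (the image names here are placeholders, and the port is left unspecified) would put both services on one default network automatically, reachable from each other by service name:

```yaml
version: '3'
services:
  w3-virtual-service-w3id:
    image: virtual-service-image   # placeholder image name
  rest:
    image: rest-image              # placeholder image name
# no networks section needed: Compose creates a project default
# network and attaches both services to it
```

The rest container can then reach the other service at http://w3-virtual-service-w3id:&lt;port&gt; without any docker network connect step.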
Related
I have Jaeger running in a Docker container on my local machine.
I've created a sample app which sends trace data to Jaeger. When running from the IDE, the data is sent perfectly.
I've containerized my app, and now I'm deploying it as a container, but the communication only works when I use --link jaeger to link both containers (as expected).
My question is:
Is there a way of adding the --link parameter within my Dockerfile, so then I don't need to specify it when running the docker run command?
There is no way to do this in the Dockerfile if you want to keep two separate images. How would you know in advance the name/ID of the container you're going to link?
Below are two solutions :
Use Docker Compose. This way, Docker will automatically connect all the containers to a shared network.
Create a bridge network and add all the containers you want to link to it. This way, you'll have name resolution and will be able to contact each container by its name.
I recommend using networking: create a network with
docker network create [OPTIONS] NETWORK
and then run your containers with --network="network". Alternatively, use docker-compose with a network shared by both services.
example:
version: '3'
services:
  jaeger:
    networks:
      - network1
  other_container:
    networks:
      - network1

networks:
  network1:
    external: true
Is there any way to make a Docker container that can access all Docker networks at the same time? The idea is that I have two Docker networks.
Let's say that they are called demo1 and demo2.
I have another docker container (called Front) that should reach demo1 and demo2 at the same time.
I can do that by declaring external networks in my docker-compose file.
However, I want to be able to declare demo3 and attach the Front container to it "dynamically", without modifying the compose file of the container and if it's possible, without restarting it.
So, I am trying to find an architecture that makes my container Front connect to any added docker network dynamically.
I can create a script in a crontab, but the idea is to do it properly.
The need is to get a common container, which can reach any other container.
In docker-compose syntax, I imagine something like this:
networks:
  all:
    name: '*'
    external: true
Is it possible ? How ?
Regards
I guess what you need is Connect a running container to a network:
Connect a running container to a network
$ docker network connect multi-host-network container1
Just find the new network's name and connect your Front container to it outside of the compose file.
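As a sketch of the "dynamic" variant from the question (the container name front is an assumption, and the scheduling — cron or otherwise — is up to you), a script could enumerate the bridge networks and connect the container to each one it is not yet attached to:

```shell
#!/bin/sh
# Attach container "front" (hypothetical name) to every user-defined
# bridge network; newly created networks are picked up on the next run.
CONTAINER=front

for net in $(docker network ls --filter driver=bridge --format '{{.Name}}'); do
  # skip the default system bridge, which cannot be joined this way
  [ "$net" = "bridge" ] && continue
  # "docker network connect" errors if already connected, so ignore failures
  docker network connect "$net" "$CONTAINER" 2>/dev/null || true
done
```

This only automates the same docker network connect command shown above; there is no wildcard network in the compose file format itself.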
I have the following setup in my computer:
One docker-machine set up for the containers of my Project A. I have my docker-compose.yml file describing which containers have to be built, the volumes to mount, and so on, and the Dockerfile for each container.
Another docker-machine set up for the containers of my Project B, with its own docker-compose.yml and Dockerfiles.
I now want to do a NFS share between a container in my project A (let's call it container 1) and another container in my project B (container 2).
I was checking links like this one, but as far as I understand it, that's for containers on the same network. In this case, my container 1 and container 2 are not on the same network, and they are on different machines.
I haven't specified any networking option when running docker-machine or in my docker-compose.yml files (apart from exposing the ports that my apps use).
How can I do an NFS share between those 2 containers?
The docker-compose up command creates a network named [projectname]_default, and all the services specified in the docker-compose.yml file are attached to that network.
For example, suppose your app is in a directory called myapp, and your docker-compose.yml looks like this:
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "8001:5432"
When you run docker-compose up, the following happens:
1) A network called myapp_default is created.
2) A container is created using web’s configuration. It joins the network myapp_default under the name web.
3) A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you want another service to make use of an existing Docker network, you need to declare it using the external option.
Use a pre-existing network
If you want your containers to join a pre-existing network, use the external option:
networks:
  default:
    external:
      name: my-pre-existing-network
Instead of attempting to create a network called [projectname]_default, Compose looks for a network called my-pre-existing-network and connects your app's containers to it.
source: https://docs.docker.com/compose/networking/#use-a-pre-existing-network
To start, I am more familiar running Docker through Portainer than I am with doing it through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want is for my Slack bot to be able to call MPC commands, such as muting the volume, etc. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could ssh into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack in a docker-compose.yml file and launch all of them with docker-compose up. This way all the containers are connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
  service1:
    image: image1
    ports:
      # only necessary to access the port from the host machine
      - "host_port:container_port"
  service2:
    image: image2
In the above example, any application in the service2 container can reach a port on service1 just by using the address service1:port.
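Applied to the Mopidy/Slack-bot question above, a minimal sketch (the service and image names are assumptions) would be:

```yaml
version: "3"
services:
  mopidy:
    image: my-mopidy-image     # placeholder image name
    ports:
      - "6680:6680"            # only needed for access from other machines
  slackbot:
    image: my-slackbot-image   # placeholder image name
```

The bot container can then run MPC commands against the host name mopidy (for example mpc -h mopidy volume 0, assuming mpc is installed in the bot image and Mopidy's MPD frontend is enabled), because Compose's network resolves the service name — no SSH between containers needed.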
I tried to setup an nginx-proxy container to access my other containers via subdomains on port 80 instead of special ports. As you can guess, I could not get it to work.
I'm kind of new to docker itself and found that it's more comfortable for me to write docker-compose.yml files so I don't have to constantly write long docker run ... commands. I thought there's no difference in how you start the containers, either with docker or docker-compose. However, one difference I noticed is that starting the container with docker does not create any new networks, but with docker-compose there will be a xxx_default network afterwards.
I read that containers on different networks cannot access each other and maybe that might be the reason why the nginx-proxy is not forwarding the requests to the other containers. However, I was unable to find a way to configure my docker-compose.yml file to not create any new networks, but instead join the default bridge network like docker run does.
I tried the following, but it resulted in an error saying that I cannot join system networks like this:
networks:
  default:
    external:
      name: bridge
I also tried network_mode: bridge, but that didn't seem to make any difference.
How do I have to write the docker-compose.yml file to not create a new network, or is that not possible at all?
Bonus question: Are there any other differences between docker and docker-compose that I should know of?
Adding network_mode: bridge to each service in your docker-compose.yml will stop compose from creating a network.
If any service is not configured with this bridge (or host), a network will be created.
Tested and confirmed with:
version: "2.1"
services:
  app:
    image: ubuntu:latest
    network_mode: bridge
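One way to verify this (a sketch; it assumes Docker and Compose are installed and the compose file above is in the current directory) is to compare the network list before and after bringing the stack up:

```shell
# with network_mode: bridge on every service, no new
# <project>_default network should appear between these two listings
docker network ls
docker-compose up -d
docker network ls

# the container's network mode can also be inspected directly;
# it should print "bridge"
docker inspect --format '{{.HostConfig.NetworkMode}}' <container_name>
```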