Docker common network

Is there any way to make a Docker container that can access all Docker networks at the same time? The idea is that I have 2 Docker networks.
Let's say that they are called demo1 and demo2.
I have another docker container (called Front) that should reach demo1 and demo2 at the same time.
I can do that by declaring external networks in my docker-compose file.
However, I want to be able to declare demo3 and attach the Front container to it "dynamically", without modifying the compose file of the container and if it's possible, without restarting it.
So, I am trying to find an architecture that makes my container Front connect to any added docker network dynamically.
I can create a script in a crontab, but the idea is to do it properly.
The need is to get a common container, which can reach any other container.
In docker-compose syntax, I imagine something like this:
networks:
  all:
    name: '*'
    external: true
Is it possible? How?
Regards

I guess what you need is Connect a running container to a network:
$ docker network connect multi-host-network container1
Just find the new network's name and connect your Front container to it outside of the compose file.
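If you want this to happen automatically, a small sketch is to watch the Docker event stream and connect the container to each network as it is created (this assumes your Front container is literally named front; adjust as needed):
$ docker events --filter 'type=network' --filter 'event=create' \
    --format '{{.Actor.Attributes.name}}' |
  while read -r net; do docker network connect "$net" front; done
This keeps the compose file untouched and avoids the crontab workaround.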

Link docker containers in the Dockerfile

I have Jaeger running in a Docker container on my local machine.
I've created a sample app which sends trace data to Jaeger. When running from the IDE, the data is sent perfectly.
I've containerized my app, and now I'm deploying it as a container, but the communication only works when I use --link jaeger to link both containers (expected).
My question is:
Is there a way of adding the --link parameter within my Dockerfile, so then I don't need to specify it when running the docker run command?
There is no way to do it in the Dockerfile if you want to keep two separate images. How would you know in advance the name/ID of the container you're going to link?
Below are two solutions:
Use Docker Compose. This way, Docker will automatically link all the containers together.
Create a bridge network and add all the containers you want to link to it. This way, you'll have name resolution and you'll be able to contact each container by its name.
I recommend using networking. Create a network:
docker network create [OPTIONS] NETWORK
and then run with --network="network"
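For example (a minimal sketch; jaegertracing/all-in-one is the official Jaeger image, while my-app is a placeholder for your own image):
$ docker network create jaeger-net
$ docker run -d --name jaeger --network jaeger-net jaegertracing/all-in-one
$ docker run -d --name my-app --network jaeger-net my-app
Your app can then reach Jaeger at the hostname jaeger, with no --link needed.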
Alternatively, use docker-compose with a shared network so the containers are linked to each other, for example:
version: '3'
services:
  jaeger:
    networks:
      - network1
  other_container:
    networks:
      - network1
networks:
  network1:
    external: true

Can't find Docker Compose network entry

I am trying to communicate from one Docker container running on my Win10 laptop with another container also running locally.
I start up the target container, and I see the following network:
docker network ls
NETWORK ID     NAME                                           DRIVER    SCOPE
...
f85b7c89dc30   w3virtualservicew3id_w3-virtual-service-w3id   bridge
I then start up my calling container with docker-compose up. I can then successfully connect my other container to the network via the command line:
docker network connect w3virtualservicew3id_w3-virtual-service-w3id w3vacationatibmservice_rest_1
However, I can't connect to that same network by adding it to the network section of my docker-compose.yml file for the calling container. I was under the impression that they both basically did the same thing:
networks:
  - w3_vacation-at-ibm_service
  - w3virtualservicew3id_w3-virtual-service-w3id
The error message tells me it can't find the network, which is not true, since I can connect via the command line, so I know it's really there and running:
ERROR: Service "rest" uses an undefined network "w3virtualservicew3id_w3-virtual-service-w3id"
Anyone have any idea what I'm missing?
The network you define under your service is expected to be declared in the top-level networks section (the same goes for volumes):
version: 'X.Y'
services:
  calling_container:
    networks:
      - your_network
networks:
  your_network:
    external: true
Do you really have to use a separate compose yml for your calling container? If both of your containers interact with each other, you should add them both to one and the same compose yml. In that case, you don't have to specify any network; they will automatically be on the same network.
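A minimal sketch of that single-file approach (the service and image names are assumptions based on the question):
version: '3'
services:
  rest:
    image: w3vacationatibmservice-rest   # placeholder image name
  w3-virtual-service-w3id:
    image: w3virtualservicew3id          # placeholder image name
No networks section is needed: Compose puts both services on a default network where each is reachable by its service name.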

Is there a way to add a hostname to an EXISTING docker container?

I have some containers that communicate via their IPs on the Docker network.
I can use the -h or --hostname option when running a new container, but I want to set the hostname for an existing container.
Is it possible?
One way is to create a network and add the containers to it.
When adding a container to the network, you can use the --alias option of docker network connect, like this:
Create a network:
docker network create <my-network-name>
Add the containers to the network:
docker network connect --alias <hostname-container-1> <my-network-name> <container-1>
docker network connect --alias <hostname-container-2> <my-network-name> <container-2>
docker network connect --alias <hostname-container-3> <my-network-name> <container-3>
So each container can see the other containers by their aliases (the alias is used as the hostname).
Enjoy.
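As a quick sanity check (using the placeholders above, and assuming ping is available in the image):
docker exec <container-1> ping -c 1 <hostname-container-2>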
Generally, you would need to stop the container and re-create it in order to run it with -h (--hostname) (unless you used --net=host).
If you cannot stop the container, you can try editing its /etc/hostname from an attached bash session.
The hostname is immutable once the container is created (although technically you can modify /etc/hostname).
As suggested in another answer, you cannot change the hostname by stopping or restarting the container. There are no Docker engine client parameters for the start command that affect the hostname. That wouldn't make sense anyway, as starting a container simply launches the ENTRYPOINT process in a container filesystem that has already been created (i.e. /etc/hostname has already been written).
It is possible to synchronize the container hostname with the host by using the --uts=host parameter when the container is created. This shares the UTS namespace. I would not recommend --net=host unless you also want to share the host network devices (i.e. bypass the Docker bridge).
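A quick way to see the difference (alpine is just a convenient small image):
$ docker run --rm --uts=host alpine hostname   # prints the host's hostname
$ docker run --rm alpine hostname              # prints the container's ID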

RabbitMQ cluster by docker-compose on different hosts and different projects

I have 3 projects that deploy on different hosts. Every project has its own RabbitMQ container. But I need to create a cluster from these 3 hosts, using the same vhost but different user/login pairs.
I tried Swarm and overlay networks, but Swarm is aimed at running solo containers and doesn't work with Compose. I also tried docker-compose bundle, but that doesn't work as expected :(
I assumed that it would work something like this:
1) On the manager node, I create an overlay network.
2) In every compose file, I extend the networks config for the RabbitMQ container with my overlay network.
3) They work as expected, and I don't publish the RabbitMQ port to the Internet.
Any idea, how can I do this?
Your approach is right, but Docker Compose doesn't work with Swarm Mode at the moment. Compose just runs docker commands, so you could script up what you want instead. For each project you'd have a script like this:
docker network create -d overlay app1-net
docker service create --network app1-net --name rabbit-app1 rabbitmq:3
docker service create --network app1-net --name app1 your-app-1-image
...
When you run all three scripts on the manager, you'll have three networks, and each network will have its own RabbitMQ service (just 1 container by default; use --replicas to run more than one). Within the network, other services can reach the message queue at the DNS name rabbit-appX. You don't need to publish any ports, so Rabbit is not accessible outside of the Docker network.
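Inside app1, the broker address in the connection string is then simply the service name (user, password, and vhost below are placeholders):
amqp://user:password@rabbit-app1:5672/my-vhost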

How to share host network bridge when using docker in docker

I'm using the https://github.com/jpetazzo/dind docker image to have Docker in Docker. When starting containers inside the parent Docker, is it possible to use the parent's bridge so that the inner containers and the parent container share a network?
What I want to do is to access the containers inside the parent docker container from the host directly by IP to assign domain names to them.
UPDATE -> Main Idea
I'm upgrading a free online Java compiler to allow users to run any program using Docker. So I'm using the dind (docker-in-docker) image to launch a main container that contains a Java program, which receives requests and launches Docker containers inside it.
So what I want to do is to give the users the option to run programs that expose a port and let them access their containers using a subdomain.
So graphically I have this hierarchy
Internet -> My Host -> Main Docker Container -> User Docker Container 1
                                             -> User Docker Container 2
                                             -> User Docker Container n
And what I want to do is to give the user a subdomain name to access his "User Docker Container" for example: www.user_25.compiler1.browxy.com
So he can have a program that expose a port in his "User Docker Container" and can access it using the subdomain www.user_25.compiler1.browxy.com
What confuses me is that to access a "User Docker Container" I first have to go through the Main Docker Container. I'm trying to find a way to access the "User Docker Container" directly, so I thought that if the User Docker Containers and the Main Docker Container shared the same network, I could access a User Docker Container directly from the host and assign a domain name to its IP by updating the /etc/hosts file on the host.
Thanks a lot for any advice or suggestion :)
Finally, I took many of the ideas larsks gave me, and this is what I did:
Start the docker-in-docker container with a name (--name compiler)
Execute this command on the host: sudo route add -net 10.0.0.0 gw $(docker inspect --format '{{ .NetworkSettings.IPAddress }}' compiler) netmask 255.255.255.0
For this to work, I added a custom bridge in the docker-in-docker container that ensures the IP range is 10.0.0.0/24
Now I can ping containers created inside the docker-in-docker container from the host
To get name resolution, I installed docker-dns in the docker-in-docker container, as larsks suggested, and added its IP to /etc/resolv.conf on the host
The result is that, from the host, I can access the containers created inside the docker-in-docker container by name.
One possible upgrade I'd like is to configure everything with Docker and not add custom settings to the host, but for now I don't know how to do that and I can live with this solution.
If you run your "Main docker container" with --net=host, then your configuration simplifies to:
Internet -> Host -> User Docker Container 1
                 -> User Docker Container 2
                 -> User Docker Container n
Although you probably want to use a bridge other than docker0 for the child containers (e.g., create a new bridge docker1, and start your dind Docker daemon with -b docker1).
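A minimal sketch of that setup (assumes iproute2 is available where the inner daemon runs; the bridge name and the 10.0.0.0/24 range mirror the ones used above):
ip link add docker1 type bridge
ip addr add 10.0.0.1/24 dev docker1
ip link set docker1 up
dockerd -b docker1 --fixed-cidr 10.0.0.0/24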
If two users were to attempt to publish a service on the same port at the same ip address, then yes, you would have port conflicts. There are a few ways of working around this:
If you can support multiple public IP addresses on your host, then you can "assign" (in quotes because this would not be automatic) one to each container. Instead of running docker run -p 80:80 ..., you would need to make the bind IP explicit, like docker run -p 1.2.3.4:80:80. This requires people to "play nice"; that is, there is nothing to prevent someone from either forgetting to specify a bind address or specifying the wrong one.
If you are explicitly running web services, then you may be able to use some sort of front-end proxy to map subdomain names to containers using name-based virtual hosting. There are several components to this process, and making it automated would probably require a little work. Doing it manually is comparatively easy (just update /etc/hosts, for example), but it is fragile because a container gets a new IP address when it is restarted. Something like a dynamic DNS service can help with this.
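For example, the jwilder/nginx-proxy image can automate much of this: it watches the Docker socket and regenerates the virtual-host config as containers come and go (the subdomain below comes from the question; user-program-image is a placeholder):
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ docker run -d -e VIRTUAL_HOST=www.user_25.compiler1.browxy.com user-program-image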
These are mostly suggestions more than solutions, but let me know if you would like more details. There are probably other ways of cracking this particular nut, so hopefully someone else will chime in.
