I have 10 different hosts, and each host runs many Docker containers. Groups of containers are managed by docker-compose; containers within the same docker-compose project can communicate with each other, and containers on the same host can communicate even though they belong to different docker-compose projects. Now I want the ability to reach a container hosted on a different machine. Other than DNS, is there any other way?
docker-compose is designed to work only within one host.
If you want your Docker containers to run on different hosts, you should consider using Kubernetes or Docker Swarm.
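If multi-host container-to-container traffic is all you need, a swarm-scoped overlay network is the lightest option. A minimal sketch, assuming the network name multi-host-net is your own choice:

# on one host (it becomes the manager)
docker swarm init
# on every other host, using the join token that init printed
docker swarm join --token <token> <manager-ip>:2377
# create an overlay network that standalone containers may attach to
docker network create --driver overlay --attachable multi-host-net
# a container on any joined host can now reach others on this network by name
docker run -d --name web --network multi-host-net nginx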
All,
I've searched high and low for this but was not able to find a reliable answer. The question may be simple for some pros, but please help me with this...
We have a situation where we need Jenkins to be able to access and build within Docker containers. The target Docker containers are built and instantiated with a separate docker-compose file. What would be the best way of connecting Jenkins to the Docker containers in each of the scenarios below?
Scenario 1: Jenkins is set up on the host machine itself, and 2 Docker containers are instantiated using their own docker-compose file. How can Jenkins connect to the containers in this situation? The host cannot ping the Docker containers, since the two are on different networks (the host on the physical network, the containers on Docker's internal network), so presumably no SSH either?
Scenario 2: We would prefer Jenkins to be in its own container (with its own docker-compose file) so that we can replicate the setup in other environments. How can Jenkins connect to the containers in this situation? The Jenkins container cannot ping the other Docker containers even though I use the same network name in both docker-compose files; instead, Docker creates an additional bridge network of its own. For example, if I have network-01 in Docker-Compose 01 and mention the same name in Docker-Compose 02, Docker creates an additional network for Compose 02. As a result, I cannot ping the Node/Mongo containers from the Jenkins container (so I guess no SSH either).
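To illustrate the naming problem (the project names app and jenkins are placeholders): by default Compose prefixes network names with the project name, so declaring the "same" network in two compose files produces two separate networks:

# docker-compose.yml in project "app" -> Docker creates app_network-01
networks:
  network-01:

# docker-compose.yml in project "jenkins" -> Docker creates jenkins_network-01
networks:
  network-01:

From what I've read, creating the network once with docker network create network-01 and then declaring it with external: true in both files should make the two projects share it, but I'd like confirmation that this is the right approach.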
Note 1: I'm exposing port 22 on both Docker images, i.e. Node & Mongo...
Note 2: Our current setup has Jenkins on the host machine, with Docker volumes from the containers exposed to the host. Is this the preferred approach?
Am I missing the big elephant in the room, or is the solution complicated (it shouldn't be!)?
I'm deploying a stack of services through the command:
docker stack deploy -c <docker-compose.yml> <stack-name>
And I'm mapping ports for one of these services in the compose file with ports: 8000:8000.
The network driver being used is overlay.
I can access these services via localhost:8000, and via the peer IPs(?).
When I inspect the network created, I can see the local IP of each container (for instance, 10.0.1.2). But where is the external IP of the container (the one like 172.0. ...)?
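For reference, this is the kind of inspection I mean (the network name mystack_default is an assumption; yours depends on the stack name):

docker network inspect -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}' mystack_default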
I am running these Docker containers on an Ubuntu virtual machine.
How can I access the services running in containers from other nodes on other networks? Isn't it possible to access them via hostIP:port?
If so, how do I get the host IP? When I run docker-machine ip I get "Host is not running".
[EDIT: I wasn't doing port mapping between the host and the VM in VirtualBox. Now it works!]
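For anyone hitting the same thing, the VirtualBox side of that fix can be scripted; a sketch, assuming a NAT-attached VM named ubuntu-vm and service port 8000:

# add a NAT port-forwarding rule while the VM is running
VBoxManage controlvm "ubuntu-vm" natpf1 "svc8000,tcp,,8000,,8000"
# (use "VBoxManage modifyvm" instead if the VM is powered off)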
What's the best way to communicate between containers on the same swarm?
Thanks
What's the best way to communicate between containers on the same swarm? Through name discovery?
In general, if you communicate between containers, you should use the container/service name.
And for your other problem, you probably want a reverse proxy like nginx or traefik.
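A minimal sketch of name-based discovery (the service names and the my-api image are made up):

# docker-compose.yml
services:
  api:
    image: my-api      # placeholder image listening on 8000
  web:
    image: nginx
    # from inside "web", the DNS name "api" resolves to the api container:
    #   curl http://api:8000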
If you tell docker-compose to scale a service, and do NOT expose its ports,
docker-compose scale dataservice=2
there will be two IPs on the network that the DNS name dataservice resolves to. So services that reach it by hostname will load balance.
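You can see this from any container on the same network; a sketch, assuming a service named client whose image ships nslookup:

docker-compose exec client nslookup dataservice
# Docker's embedded DNS returns the IPs of both replicas; clients that
# re-resolve the name spread their connections across them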
I would also like to do this for the edge proxy as well. The point would be that
docker-compose scale edgeproxy=2
would cause edgeproxy to resolve to one of 2 possible IP addresses.
But the semantics of exposing ports are wrong for this. If I expose:
8443:8443
then it will try to bind each edgeproxy replica to host port 8443. What I want is more like:
0.0.0.0:8443:edgeproxy:8443
where, when you come into the Docker network via host port 8443, it randomly selects an edgeproxy:8443 IP to hand the incoming TCP connection to.
Is there an alternative to a plain port-forward? I want a single port that gets me in to talk to any IP that resolves as edgeproxy.
This is provided by swarm mode. You can enable a single-node swarm cluster with:
docker swarm init
And then deploy your compose file as a stack with:
docker stack deploy -c docker-compose.yml $stack_name
There are quite a few differences from docker-compose, including:
- Swarm doesn't build images
- You manage the target state with docker service commands; trying to stop a container with docker stop won't work, since swarm will restart it
- The compose file needs to use the v3 syntax
- Networks will be overlay networks and, by default, not attachable by containers outside of the swarm
One of the main changes is that exposed ports are published on an ingress network managed by swarm mode, and connections are round-robin load balanced to your containers. You can also define a replica count inside the compose file, eliminating the need to run a scale command.
See more at: https://docs.docker.com/engine/swarm/
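As a minimal sketch of that last point (the service and image names are placeholders):

version: "3.8"
services:
  edgeproxy:
    image: my-edgeproxy    # placeholder image
    ports:
      - "8443:8443"        # published on the swarm ingress network
    deploy:
      replicas: 2          # connections are load balanced across both replicas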
I have two instances of Keycloak running in containers, each on a single node.
The nodes are bare-metal nodes inside my company network.
Keycloak uses TCPPING as its discovery protocol.
Since the two containers are running on different nodes, and each instance is pinging inside the Docker default network, they are not able to find each other.
I say Docker default network because I didn't specify a special network for the two containers.
Any idea how I can make the two instances discover each other in this architecture?
I was also thinking about Docker Swarm as a solution.
Assuming the two nodes are on the same network and are able to connect to each other, you can get the two containers to discover each other using Docker host networking.
It is as easy as docker run --net=host.
Docker host networking makes the container use the network stack of the host node, so it shares the host's IP address and, for all practical purposes, looks like another host on that network.
This allows the two containers to discover each other using TCPPING.
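A sketch of that setup, using the jboss/keycloak image's JGroups variables (the variable names, the 7600 JGroups port, and the node IPs 10.0.0.11/10.0.0.12 are assumptions to verify against the image docs):

# run on each node with host networking
docker run -d --net=host \
  -e JGROUPS_DISCOVERY_PROTOCOL=TCPPING \
  -e JGROUPS_DISCOVERY_PROPERTIES='initial_hosts="10.0.0.11[7600],10.0.0.12[7600]"' \
  jboss/keycloak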
Docker Swarm would also enable this. Docker Swarm basically abstracts multiple host nodes so that you can run containers on them as if you were running Docker on a single host. But that would require docker-machine and a whole new setup.
Okay, so in Vagrant/VVV you can assign different hostnames to your different projects, so when you go to http://myproject-1.dev your website shows up.
This is very convenient if you are working on dozens of projects at the same time. As far as I know, such a thing is not possible in Docker (it can't touch the hosts file). My question is: is there something similar we can do in Docker? Some automated tool, maybe?
I'm using Docker for Windows.
Hostnames can tie multiple containers together. In docker-compose there's a hostname option, but that only applies within the Docker bridge network; it is not available to the host.
Docker isn't a VM (although on Windows it runs within one).
You can edit your hosts file to make the hypervisor's VM reachable, but the intended approach is to forward host ports into the container.
Use localhost, not a hostname.
If you prefer your Vagrant patterns, keep using them, but provision Docker containers from Vagrant, or use Docker Machine.
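If all you miss is the per-project hostname, a low-tech sketch that follows the port-forwarding advice above (the .test name is an arbitrary choice):

# hosts file (C:\Windows\System32\drivers\etc\hosts on Windows):
127.0.0.1 myproject-1.test

# docker-compose.yml: publish the container's web port on the host
services:
  web:
    image: nginx
    ports:
      - "80:80"

# http://myproject-1.test then resolves to localhost and reaches the container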