Multiple Docker host machine communication - docker

Suppose I want to connect one container to another container, where the two Docker containers are running on different machines. How do I do that? Hopefully the attached picture helps explain what I need. Thanks.

This works exactly the same way as if neither process was running in Docker: connect to the other system's IP address and the port you published when you launched the container.
machine02$ docker run --name m2-c1 -p 12345:80 image1
machine01$ docker run --name m1-c5 \
> -e CONTAINER_1_URL=http://192.168.1.102:12345 \
> image5
If you find yourself doing this often, a clustered setup like Kubernetes or Docker Swarm is built for this sort of environment. They have a piece called an overlay network that would allow all 10 containers to share a single "network", so you can directly call c1 as a host name and reach either copy of it. A non-Docker service discovery system, like Hashicorp's Consul, can also help remember what service is running on which node.
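As a rough illustration of the overlay-network idea, here is a minimal Docker Swarm sketch; the image names are carried over from the example above, while the join token, manager address, and network name are placeholders:
machine01$ docker swarm init
machine02$ docker swarm join --token <token> <manager-ip>:2377
machine01$ docker network create -d overlay --attachable app-net
machine01$ docker service create --network app-net --name c1 image1
machine01$ docker service create --network app-net --name c5 \
> -e CONTAINER_1_URL=http://c1:80 \
> image5
Because both services share the overlay network, c5 can reach c1 by its service name instead of a host IP address and published port.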

Related

Communication from Docker-Container to outside

I am quite new to Docker and I have a question about connecting container services with traditional ones.
Currently I am thinking of replacing a traditional Grafana installation (directly on a Linux server) with a Grafana Docker container.
In Grafana I have to connect to different data sources, like a MySQL instance, a Windows SQL database, and so on, so Grafana is pulling data. All these data sources reside (and will still reside) on other hosts, and they are not containers.
So how can I make my container communicate with these data sources? Is that possible by default, or do I have to set up a special kind of network? I saw that there is an option called macvlan...is that the correct way?
BR
Jan
This should work out of the box, as far as I understand. At least, I'm using Grafana inside a Docker container and it works perfectly.
You can test connectivity from inside your Docker container to some external resource by opening a container shell like this:
docker exec -it <container ID> /bin/bash
And then
root@a9cbebfc4564:/# curl google.com
Or
root@a9cbebfc4564:/# ping <bla-bla>
The commands above depend on the Docker image's environment (the OS and installed software), but this can be solved in the same way as on a regular Unix environment.
P.S. I once encountered a container-to-host connection issue, but it was due to an incorrect firewall configuration on the host side.
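For reference, a minimal sketch of that out-of-the-box setup; the published port is Grafana's default, and the data source host below is just a placeholder:
docker run -d --name grafana -p 3000:3000 grafana/grafana
# outbound connections from the default bridge network need no extra setup, so inside
# Grafana you can point the MySQL data source at e.g. db-host.example.com:3306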
Since you are replacing a traditional installation, you can start with host networking. This mode gives you the same connectivity experience as installing directly on the host. A quick start is as simple as:
docker run --network host grafana/grafana
Notice there's no need to --publish or --publish-all ports, as the Grafana container now shares the host network.

Inspect Docker Containers on Host from Container on same Host

I'm in the process of containerizing a service which monitors other services, many of which are also running on containers. Right now, it runs docker inspect from a Python subprocess on the host to monitor other services' containers. How can I get similar information from within another container?
I've considered running the same commands on the host via ssh, but it seems like there should be a better way. I can't be the first person to want to have one container monitor others, but all I find on the web is 3rd party solutions, which seem overkill for the problem at hand.
You can mount the Docker socket inside the container that contains your Python program.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-python-program
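With the socket mounted, the program can either call the docker CLI (if it is installed in the image) or talk to the Docker Engine HTTP API directly over the socket. A minimal sketch, assuming curl is available inside the container and using a placeholder container name:
# list running containers on the host (same data as `docker ps`)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# inspect one container by name or ID (same data as `docker inspect`)
curl --unix-socket /var/run/docker.sock http://localhost/containers/my-service/json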

Unable to connect outside database from Docker container App

We have two machines: one is a Windows machine and the other is a Linux machine. My application is running inside a Docker container on the Linux machine. Our database is running on the Windows machine, and our application needs to get data from the Windows machine's DB.
We have given the proper data source details (IP, username, password) in our application. It works when we do not use a Docker container, but when we use a Docker container it does not work.
Can anyone help us figure out how we can connect to an outside DB from a Docker-enabled application? We are totally new to Docker.
Any help would be much appreciated.
A container's default network is "bridge"; you should choose a macvlan or host network instead.
Method 1
docker run -d --net host image
This container will share your host's IP address and will be able to access your database.
Method 2
Use the docker network create command to create a macvlan network (reference here),
then create your container with:
docker run -d --net YOURNETWORK image
The container will get an IP address on the same network (same gateway) as its host.
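A minimal sketch of the macvlan route; the subnet, gateway, and parent interface below are examples and must match your actual LAN:
docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 \
    YOURNETWORK
docker run -d --net YOURNETWORK image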
There are a lot of issues that could be affecting your container's ability to communicate with your database. In the future you should compose your question with as much detail as possible. To correctly answer this you will, at a minimum, need to include the following details:
Linux distribution name & version
Docker version
Output of docker inspect from the container
Linux firewall configuration
Network configuration
Is your Windows machine running on the same local network / subnet as your Linux machine? If so, please provide information about the subnet, as the default bridge set up by Docker may restrict access to local resources, whereas those over a wide area network would still be accessible.
You can try passing the --network=host option to your docker run command like so: docker run --network=host <image name>. Doing so eliminates the need to specify port mappings in your run command, as they are ignored when using the host's network.
Please edit your question and include the above requested details to get a complete answer.

Rancher: Multiple hosts in the same physical machine

I'm getting acquainted with Rancher and Docker, and I'm now trying to figure out whether it is possible to create multiple local custom hosts on the same physical machine. I'm running RancherOS on a local computer. Through the Rancher web UI I'm able to create a local custom host and add containers to it.
When I try to add another local custom host by copying the given command into the terminal (SSHed into the Rancher machine), it starts the process but nothing happens. The new host doesn't appear in the hosts list of the web interface, and I don't receive any error in the terminal.
I couldn't get any useful information from the Rancher documentation about this possible issue.
I was wondering whether it's simply not possible to have more than one custom virtual host on the same physical machine, or whether the command fails for some reason, in which case I would like to know how to debug it.
sudo docker run -e -d --privileged \
-v /var/run/docker.sock:/var/run/docker.sock rancher/agent:v0.8.2 \
http://192.168.1.150:8080/v1/projects/1a5/scripts/<registrationToken>
where registrationToken is replaced by the one provided by rancher.
There is nothing "virtual" about them. The agent talks to Docker and manages one Docker daemon, which is the entire machine. Running multiple agents does not make sense for a variety of reasons. For example: when you type "docker run ..." on the machine, which agent is supposed to pick up that container? And they are not really isolated from each other regardless, because any of them can run privileged containers, which can then do whatever they want and affect the others.
The only way to do what you're asking is to have actual virtual machines running on the physical machine, each with their own OS and docker daemon.
Another option might be to use Linux containers to create separate environments, each having its own IP address and running its own Docker daemon.
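As a rough sketch of that nested-daemon idea using the official docker:dind image (whether a Rancher agent behaves correctly inside such a container is not verified here):
# each of these runs its own isolated Docker daemon
docker run -d --privileged --name dind1 docker:dind
docker run -d --privileged --name dind2 docker:dind
# talk to one of the inner daemons
docker exec -it dind1 docker ps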

RabbitMQ cluster by docker-compose on different hosts and different projects

I have 3 projects that deploy to different hosts. Every project has its own RabbitMQ container. But I need to create a cluster across these 3 hosts, using the same vhost but a different user/login pair for each.
I tried Swarm and overlay networks, but Swarm is aimed at running solo containers and it doesn't work with Compose. I also tried docker-compose bundle, but that does not work as expected :(
I assumed that it would work something like this:
1) On the manager node I create an overlay network.
2) In every Compose file I extend the networks config for the RabbitMQ container with my overlay network.
3) Everything works as expected, and I don't publish the RabbitMQ port to the Internet.
Any idea, how can I do this?
Your approach is right, but Docker Compose doesn't work with Swarm Mode at the moment. Compose just runs docker commands, so you could script up what you want instead. For each project you'd have a script like this:
docker network create -d overlay app1-net
docker service create --network app1-net --name rabbit-app1 rabbitmq:3
docker service create --network app1-net --name app1 your-app-1-image
...
When you run all three scripts on the manager, you'll have three networks, each network will have its own RabbitMQ service (just 1 container by default, use --replicas to run more than one). Within the network other services can reach the message queue by the DNS name rabbit-appX. You don't need to publish any ports, so Rabbit is not accessible outside of the Docker network.
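As a usage sketch, the application service could be handed the queue address through an environment variable; the variable name, credentials, and vhost below are placeholders:
docker service create --network app1-net --name app1 \
    -e RABBITMQ_URL=amqp://app1user:app1pass@rabbit-app1:5672/shared-vhost \
    your-app-1-image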
