Connecting Multiple Docker containers to host - docker

TL;DR: I just want a way to forward traffic addressed to localhost on to the host, without using --net=host.
I'm running multiple containers on the same host, and they need to access an instance of Redis that's available at localhost:6379. I also need port forwarding, so using --net=host is not an option.
How can I start multiple containers and allow all of them to forward traffic addressed to localhost on to the host?
I have also tried docker run --add-host localhost:<private ip address> -p <somehostport>:<somecontainerport> my_image, with no success: I still get "connection to 127.0.0.1:6379 refused", as if localhost were not resolved to the host's private IP.

I'm running multiple containers on the same host, and need them to access an instance of Redis that's available at localhost:6379.
You can't.
If something is listening only on localhost, then you can't connect to it from another computer, from a virtual machine, or from a container. However, if your service listens on any other address on your host, you can simply point your containers at that address.
One solution is to configure your Redis service to listen on the address of the docker0 bridge, and then point your containers at that address.
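A minimal sketch of that approach, assuming the default docker0 bridge address 172.17.0.1 (verify yours with ip addr show docker0; the image name is an assumption):

```shell
# On the host: make Redis listen on loopback plus the docker0 bridge
# address. In /etc/redis/redis.conf:
#   bind 127.0.0.1 172.17.0.1
sudo systemctl restart redis

# In each container, point the client at the bridge address instead
# of localhost:
docker run -d -p 8080:8080 -e REDIS_URL=redis://172.17.0.1:6379 my_image
```

Note that 172.17.0.1 is only reachable from containers on that bridge, so this keeps Redis off the external network interfaces.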

This is better solved by a small redesign. Move redis into a container. Connect containers via container networking, and publish the redis port to localhost for anything that still isn't in a container. E.g.
docker network create redis
docker run -d --net redis -p 127.0.0.1:6379:6379 --name redis redis
docker run -d --net redis -e REDIS_URL=redis:6379 your_app
Containers communicate by container name over a user-created network, so your app will need to be configured with the new Redis URL (changing localhost to redis).
The only other solution I've seen for this involves hacking iptables rules, which isn't very stable when containers get redeployed.
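For the app side, a sketch of how an entrypoint script might build the new Redis URL, defaulting the host to the container name redis from the commands above (the variable names are assumptions):

```shell
# Default the Redis host to the container's name on the shared network;
# deployments can still override it via the environment.
REDIS_HOST="${REDIS_HOST:-redis}"
REDIS_PORT="${REDIS_PORT:-6379}"
REDIS_URL="redis://${REDIS_HOST}:${REDIS_PORT}"
echo "$REDIS_URL"
```

Anything still running directly on the host keeps using 127.0.0.1:6379 via the published port.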

Related

Why is it possible to connect to the redis docker container without port forwarding?

When I run: docker run --rm -it redis, The container receives ip: 172.18.0.2. Then from the host I connect to the container with the following command: redis-cli -h 172.18.0.2, and it connects normally, everything works, the keys are added. Why does this happen without port forwarding? Default docker network - bridge
docker run --rm -it redis does not publish the port. Try stopping the redis container, then run redis-cli -h 172.18.0.2 to check whether another Redis instance exists.
It is only possible because you're on native Linux, and the way Docker networking is implemented, it happens to be possible to directly connect to the container-private IP addresses from outside Docker.
This doesn't work in a wide variety of common situations (on macOS or Windows hosts, if Docker is actually running in a VM, or if you're making the call from a different host), and the IP address you get can change if the container is recreated. As such, looking up the container-private IP address is not usually a best practice. Use docker run -p to publish a port, and connect to that published port on the host's IP address.
It's because the Redis Dockerfile EXPOSEs the right port for the API, which is 6379.
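A sketch of the published-port approach from the accepted answer, which behaves the same on Linux, macOS, and Windows (the container name is an assumption):

```shell
# Publish the container's 6379 on the host instead of dialing the
# container-private IP; the published address survives recreation.
docker run -d --name redis -p 6379:6379 redis

# Connect via the host address rather than 172.18.0.2:
redis-cli -h 127.0.0.1 -p 6379 ping
```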

How to access a Process running on docker on a host from a remote host

How to access or connect to a process running on docker on host A from a remote host B
Consider a Host A with IP 192.168.0.3 which is running an application on Docker on port 3999.
I want to access that application from a remote machine with IP 192.168.0.4 in the same subnet.
To be precise, I am running a Kafka producer on the server and I am trying to receive messages using kafka-console-consumer.
Use --net=host to run your container and it'll use the host's network stack, then you can connect to the application running inside container like it's running on host directly.
Port mapping: use the -p option to map a port of your host to the port inside your container, e.g. docker run -d -p <host port>:<container port> <image>; then you can connect to <host>:<host port> to reach your application inside the container.
Docker's built-in multi-host networking. In early releases the network driver was separate from Docker's core and you had to use 3rd-party tools like flannel or weave for multi-host connections, but as of release 1.9 it has been merged into Docker. You can follow its guide to set it up.
Hope this is helpful :-)
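For example, option 1 might look like this on Host A (the image name is an assumption; host networking is Linux-only):

```shell
# The container shares Host A's network stack, so whatever port the
# producer listens on (3999 here) is reachable at 192.168.0.3:3999
# with no -p mapping needed.
docker run -d --net=host kafka-producer
```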
First you need to bind the docker container's port to Host A:
docker run -d -p 3999:3999 kafka-producer
Then you need to access Host A from Host B using IP:Port
192.168.0.3:3999
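From Host B, the console consumer can then point at Host A's address and published port (the topic name is an assumption):

```shell
# kafka-console-consumer.sh ships in Kafka's bin/ directory;
# "test" is a hypothetical topic name.
kafka-console-consumer.sh --bootstrap-server 192.168.0.3:3999 \
  --topic test --from-beginning
```

Note that the broker's advertised listeners must also resolve to 192.168.0.3 from Host B, or the consumer will connect and then be redirected to an unreachable address.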

docker networking: How to know which ip will be assigned?

Trying to get acquainted with docker, so bear with me...
If I create a database container (psql) with port 5432 exposed, and then create another webapp container which wants to connect on 5432, they get assigned IP addresses on Docker's bridge network...
probably 172.17.0.2 and 172.17.0.3 respectively, if I fire up the containers and inspect their IPs with docker network inspect <bridge id>.
If I then take those IPs and plug them, with the port, into my webapp settings, everything works great...
BUT I shouldn't have to run my webapp, shell into it, change settings, and then run the server; I should be able to just run the container...
So what am I missing here, is there a way to have these two containers networked without having to do all of that?
Use a Docker network
docker network create myapp
docker run --network myapp --name db [first-container...]
docker run --network myapp --name webapp [second-container...]
# ... and so on
Now you can refer to containers by their names from within other containers, just as if they were hostnames in DNS.
In the application running in the webapp container, you can configure the database server using db as if it were a hostname.
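For instance, the webapp's settings can reference the name instead of an IP; a sketch of deriving the connection string, assuming the container name db from the commands above (the variable names and database name are assumptions):

```shell
# The container name is stable across restarts, unlike the bridge IP,
# so it can be baked into configuration or defaulted from the environment.
DB_HOST="${DB_HOST:-db}"
DB_PORT="${DB_PORT:-5432}"
DATABASE_URL="postgresql://${DB_HOST}:${DB_PORT}/webapp"
echo "$DATABASE_URL"
```

No shelling in and editing settings per-deploy: the same image works anywhere the db container exists on the shared network.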

port linking from docker container to host

I have the following situation: I have a service that listens on 127.0.0.1, port 1234 (this cannot be changed for security reasons). On the same machine runs a Docker container. I need to somehow connect from within the container to the service on the host. Because the service only accepts requests from 127.0.0.1, I need to link the port from the container to the host port, but in reverse, so that when I connect from within the container to 127.0.0.1:1234, the service on the host receives the data. Is this possible?
Thanks.
With the default bridge network, you won't be able to connect from the container to a service on the host listening on 127.0.0.1. But you can use --net=host when running a container to use the host network stack directly in the container. It removes some of the isolation, but then allows you to talk to 127.0.0.1 from the container and reach services running on the host.
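A sketch of that, assuming the loopback-only service from the question on port 1234 and a hypothetical image name:

```shell
# Host networking: 127.0.0.1 inside the container IS the host's
# loopback, so the loopback-only service is reachable (Linux only).
docker run --rm --net=host my_image \
  sh -c 'nc -z 127.0.0.1 1234 && echo reachable'
```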
Question
How to bind Dockerized service on localhost:port ?
Answer
Use the -p flag like this: docker run -p 127.0.0.1:1234:1234 <other options> <image> <command>.

Access host port as localhost from docker

I have two machines connected via SSH tunnelling such that machine1:2222 can access machine2:2222 as localhost. machine2 runs the container docker2 and exposes services on port 2222 to localhost only. I can access these from machine1 on port 2222.
I would like to be able to access machine1:2222 from docker1, a container running on machine1 as localhost. I can determine the gateway IP address from within docker1, however connections are rejected because they come from the IP address assigned to docker1 rather than localhost.
So, what is the best way to access services on machine2 from docker1 on machine1? Solutions I've seen seem to involve modifying iptables on the host machine which doesn't seem all that portable.
This is what the --net flag is for:
user@machine1:~$ docker run --net=host -ti docker1 /bin/bash
root@machine1:/# wget localhost:2222
(this will download whatever a request to machine2:2222 provides)
