Access host port as localhost from docker

I have two machines connected via SSH tunnelling such that machine1:2222 can access machine2:2222 as localhost. machine2 runs the container docker2 and exposes services on port 2222 to localhost only. I can access these from machine1 on port 2222.
I would like to be able to access machine1:2222 as localhost from docker1, a container running on machine1. I can determine the gateway IP address from within docker1, but connections are rejected because they come from the IP address assigned to docker1 rather than from localhost.
So, what is the best way to access services on machine2 from docker1 on machine1? The solutions I've seen seem to involve modifying iptables on the host machine, which doesn't seem all that portable.

This is what the --net flag is for:
user@machine1:/$ docker run --net="host" -ti docker1 /bin/bash
root@machine1:/# wget localhost:2222
>> (this will download whatever a request to machine2:2222 provides)
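For completeness, a minimal sketch of the whole sequence on machine1, assuming the tunnel is a plain ssh -L local forward (the remote user name is a placeholder):
# forward machine1's localhost:2222 to machine2's localhost:2222 (user is a placeholder)
ssh -N -L 2222:localhost:2222 user@machine2 &
# --net=host shares machine1's network namespace, so localhost inside the container is machine1's localhost
docker run --net=host -ti docker1 /bin/bash
# inside the container, this now reaches the service on machine2 through the tunnel
wget localhost:2222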

Related

Why is it possible to connect to the redis docker container without port forwarding?

When I run docker run --rm -it redis, the container receives the IP 172.18.0.2. Then from the host I connect to the container with the following command: redis-cli -h 172.18.0.2, and it connects normally; everything works and keys are added. Why does this happen without port forwarding? (The container is on the default docker bridge network.)
docker run --rm -it redis will not expose the port. Try stopping the redis container, then run redis-cli -h 172.18.0.2 to check whether another redis instance exists.
It is only possible because you're on native Linux, where, due to the way Docker networking is implemented, it happens to be possible to connect directly to the container-private IP addresses from outside Docker.
This doesn't work in a wide variety of common situations (on macOS or Windows hosts, if Docker is actually running in a VM, or if you're making the call from a different host), and the IP address you get can change if the container is recreated. As such, looking up the container-private IP address is not usually a best practice. Use docker run -p to publish a port, and connect to that published port on the host's IP address.
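A minimal sketch of the published-port approach (the container name is arbitrary):
docker run -d -p 6379:6379 --name redis redis
# connect via the host address and the published port, not the container-private IP
redis-cli -h 127.0.0.1 -p 6379 ping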
It's because the redis Dockerfile exposes the right port for the service, which is 6379.

How to communicate with a running Docker container in a Host X from another Host Y(not from a container in Host Y)

I am experimenting with Docker networking, and I have set up the following scenario:
Docker is installed on host-X, which is connected to a network (host-X IP: 60.0.0.28), and a basic Ubuntu container is running on it. The container is connected to the default docker bridge network only, i.e. 172.17.0.0/16, and 172.17.0.2 is the container IP. Now I am trying to communicate with that running container from another host-Y on the same network (host-Y IP: 60.0.0.40), on which no docker is installed.
I added a basic route on host-Y: "ip route add 172.17.0.0/16 via 60.0.0.28 dev ens3".
From the container I am able to ping host-Y; in the reverse direction, I am only able to ping the docker gateway 172.17.0.1 from host-Y but cannot reach the container.
There are a wide variety of situations where the Docker-internal IP addresses just aren't useful; calling from a different host is one of them. You should totally ignore those as an implementation detail.
If you take Docker out of the picture, and run the process directly on the host, this should be straightforward: from host Y, you can call the process on host X given its DNS name and the port the server is running on.
hostY$ curl http://hostX:12345/
If the process is actually running in a Docker container, you need to make sure you've started the container with a published port. This doesn't necessarily need to match the port the process is listening on.
hostX$ docker run -p 12345:12345 imagename
Once you've done this, the process can be reached via the host's DNS name or IP address, and the published port, the same way as with a non-container server.
In normal circumstances you should not need to think about the Docker-internal IP addresses; you do not need manual ip route setup commands like the one you show, and you shouldn't use docker inspect or docker run --ip to find or set this detail.
Let’s assume you want to start Dockerized nginx on host X.
You’d run:
docker run --detach -p 8080:80 nginx
Then you could access your nginx instance using http://60.0.0.28:8080.
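As a quick check, a request from host Y would then look like the earlier example:
hostY$ curl http://60.0.0.28:8080/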

Connecting Multiple Docker containers to host

TL;DR: I just want a way to forward traffic addressed to localhost inside a container to the host, without using --net=host.
I'm running multiple containers on the same host, and need them to access an instance of Redis that's available at localhost:6379. Also, I need to use port forwarding, so using --net=host is not an option.
How can I start multiple containers and have all of them forward traffic addressed to localhost to the host?
I have also tried docker run --add-host localhost:<private ip address> -p <somehostport>:<somecontainerport> my_image, with no success (I still get that the connection to 127.0.0.1:6379 is refused, as if localhost were not resolved to the host's private IP).
I'm running multiple containers on the same host, and need them to access an instance of Redis that's available at localhost:6379.
You can't.
If something is listening only on localhost, then you can't connect to it from another computer, from a virtual machine, or from a container. However, if your service is listening on any other address on your host, you can simply point your containers at that address.
One solution is to configure your Redis service to listen on the address of the docker0 bridge, and then point your containers at that address.
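A minimal sketch of that approach, assuming the default docker0 address 172.17.0.1 and a stock Redis install (the config path and service name vary by distribution):
# on the host, make Redis listen on the bridge address as well as loopback
#   /etc/redis/redis.conf:  bind 127.0.0.1 172.17.0.1
sudo systemctl restart redis
# in each container, point the client at the bridge address instead of localhost
docker run -d -p <somehostport>:<somecontainerport> -e REDIS_URL=172.17.0.1:6379 my_image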
This is better solved by a small redesign: move redis into a container, connect the containers via container networking, and publish the redis port to localhost for anything that still isn't in a container. E.g.
docker network create redis
docker run -d --net redis -p 127.0.0.1:6379:6379 --name redis redis
docker run -d --net redis -e REDIS_URL=redis:6379 your_app
Containers need to communicate by container name over a user-created network, so your app will need to be configured with the new redis URL (changing localhost to redis).
The only other solution I've seen for this involves hacking iptables rules, which isn't very stable when containers get redeployed.

Rancher container taking over host IP

I have 2 IP addresses in my rancher host (centos): 1.1.1.1 and 2.2.2.2
1.1.1.1 is the IP address I want to use to access the rancher UI and SSH into the host.
I want to use 2.2.2.2 for accessing the containers of an application. I have 2 containers, one nginx and one ssh. I configured the containers to map host port 2.2.2.2:80 to nginx and host port 2.2.2.2:22 to ssh.
I have also changed the default run command for the rancher container to listen on port 80 and 443 of IP 1.1.1.1
If I go to my browser and access 1.1.1.1 I see rancher as expected, and if I access 2.2.2.2 I see my container app as expected.
However, if I try accessing 1.1.1.1:22, I end up connecting to the container's ssh, which should only be listening on 2.2.2.2:22.
Am I missing something here? Is this a configuration issue on the host or the container? Can the container get access to something that it shouldn't even be aware of?
UPDATE
Let me try to clarify the setup:
Rancher is running in a host with 2 IP addresses. When I run rancher, I execute the following command, so it becomes attached to the first IP address:
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 1.1.1.1:80:80 -p 1.1.1.1:443:443 rancher/rancher
docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.7 --server https://rancher1.my.tld --token [token] --ca-checksum [checksum] --etcd --controlplane --worker
I have 4 containers configured in the rancher UI, which I want pointing to 2.2.2.2:22, 2.2.2.2:80, 2.2.2.2:2222 and 2.2.2.2:8080.
These are 2 environments for an application. Ports 22 and 80 are the ssh and nginx containers for the LIVE environment (sharing a data volume between them), and the same goes for 2222 and 8080, which belong to the QA environment. I use the ssh container to upload content to the nginx container through the shared data volume.
I don't see a problem with this configuration, except that when I configure the ssh container to use port 22 and then try connecting to the host's ssh, I get connected to the container's ssh instead.
UPDATE 2
Here is a screenshot from the port mapping settings in the container: https://snag.gy/idTjoV.jpg
Container port 22 mapped to IP 2.2.2.2:222
If I set that to 2.2.2.2:22, SSH to the host stops working and ssh connections are established to the container instead.
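For reference, the IP-specific binding being requested here corresponds to the following plain docker run mapping (the image and container names are placeholders); only 2.2.2.2 should then be bound on the host:
docker run -d --name live-ssh -p 2.2.2.2:22:22 my_ssh_image
# verify the binding: the output should list 2.2.2.2:22, not 0.0.0.0:22
docker port live-ssh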

How to connect to a HTTP server in Docker container on a remote host?

I want to run a Web server (nginx-based) in a container on a machine on the local network and access it over this network. It's working fine on the local machine, but I can't figure out how to get to it from another machine.
I've tried:
sudo docker network create hyperdata-network
sudo docker network inspect hyperdata-network
gives me an IP of 172.18.0.2, which I can ping.
Next I tried to attach the nginx-based container (called hyperdata-static) to the network:
sudo docker run -itd --name hyperdata --network=hyperdata-network --hostname=hyperdata -p 80:80 hyperdata-static
but I can't see port 80, and the docs have got me hopelessly confused.
Ideally I'd also like to address the Web server by name.
Suggestions?
Check whether your local machine is accessible from the remote host (try pinging the host machine's IP address from the remote machine). If it is accessible, try hitting http://<host machine's IP>:80/ (you don't have to give port 80 explicitly, as it is the HTTP default). If the machine is reachable but port 80 isn't, look for possible firewall restrictions.
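A minimal sketch of those checks, assuming the Docker host's LAN address is 192.168.1.50 (a placeholder; substitute the real address):
# on the Docker host: confirm the port is actually published and answering locally
docker ps --filter name=hyperdata --format '{{.Ports}}'
curl -I http://localhost:80/
# from the other machine on the LAN (192.168.1.50 is a placeholder for the host's address)
ping 192.168.1.50
curl -I http://192.168.1.50:80/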
