How to connect to docker host?

I'm a bit confused about "what" a docker host is and how it is different from my system itself.
I did the following,
docker run jenkins
# in a new tab
docker ps                       # to get the container ID
docker inspect {container-id}   # to get the IP
From what I understand, the only way I can connect to the container's IP is from within the docker host (if I don't port map, that is) - so how do I connect to the host?
I know I can bash into the container and curl the IP I got from inspect, but that's not the same as connecting to the docker host, is it?

The way you are using the term "docker host" here makes it sound like you are referring to the container itself. (You might also be referring to the physical machine the container is running on.)
You can think of the container as basically a very lightweight VM -- it has its own filesystem, network, possibly CPU and RAM resources, etc. So, without configuring the network, the container will be isolated. This analogy isn't perfect for any number of reasons, but it's pretty close to what is going on.
Put another way: without port mapping (or "host networking"; see the docker networking documentation for more details), you can, as you discovered, only reach that IP from within the container's network, unless you map the ports (or, perhaps, are inside a different container connected to the same bridge network).
In this case, you are probably just best off mapping the port so that you can access the service running inside the container.
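For example, a minimal sketch (Jenkins listens on port 8080 by default; the host-side port here is an arbitrary choice):
docker run -p 8080:8080 jenkins
# the service is now reachable from the host at http://localhost:8080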

Related

Open TCP connection to specific node in docker swarm

Question:
How can I access specific containers inside a docker swarm network from outside the network?
I don't need to access arbitrary ports, the exposed container ports are fine, but I need to be able to connect to a specific container, not just any container I am routed to via load balancing.
As in, I can currently do:
curl localhost:8582/service_id
And get something like:
1589697532253.0.8570331623512102
But the result varies, because the request is load balanced to a different container each time I make it. I only need this for debugging; I usually want the load-balancing behavior, but when there is an issue with a specific container, it is essential that I make requests only to that container.
I can do it from within a container inside the network, but it is a lot easier to debug from my local machine than from inside a container.
Environment:
I am not sure if it is relevant, but I am on Windows, running Docker Desktop, Engine v19.03.8.
Things I tried:
I tried tunneling into the docker network with WireGuard; however, I believe that is a non-starter because my host OS is Windows, and I can't find any WireGuard images that support non-Linux host OSes (and I'm not sure that is even technically possible).
When I run docker network inspect ingress -v, I can see there appear to be IPs associated with each container (10.0.0.12, 10.0.0.13) which differ from the IPs on the overlay network (10.0.18.7, 10.0.18.8), but when I try to access my exposed port via any of those IPs, the connection attempt is ignored and does not connect.
I tried adding a specific network route to make sure the packets were going to docker, by forcing all packets in the /24 address range to go through the docker gateway, but that didn't work either (route add -p 10.0.0.0 MASK 255.255.255.0 192.168.8.177 METRIC 1 IF 49).
Any suggestions would be greatly appreciated!
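For what it's worth, one commonly suggested workaround (not mentioned in this thread) is host-mode publishing, which bypasses the ingress routing mesh and publishes each task's port directly on the node it runs on. A sketch, with myimage and the ports as placeholders:
docker service create --name myservice \
  --publish mode=host,target=8582,published=8582 \
  myimage
# each node now serves only its local task on port 8582, with no mesh load balancing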

docker container networking to connect localhost in one container to another

I am using the default bridge network for docker (and yes, I am relatively new to docker). I have two docker containers.
The first container provides a service on port 12345. When creating this container, I did not specify the --publish option because I did not want to expose this port to the outside world.
The second container needs to use the service from the first container. However, the application running in this second container is hardcoded to access the service at 127.0.0.1:12345. Clearly, the second container's localhost is not the same as the first container's. Is there a way to coax docker networking into treating localhost in the second container as if it were connected to the port in the first container, without exposing anything to the outside world?
Option N: (this works but may not be the best solution)
One way you can force this to behave the way you need is by injecting an additional service into the application container that binds to the port and redirects traffic outward.
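# run inside the second container; 172.18.0.2 is assumed to be the first container's bridge IP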
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345
In a quick test here, I was able to confirm that 127.0.0.1:12345 is treated as the remote port 12345.
Things to consider:
The two containers need to be able to reach each other.
It breaks the recommendation of one service per container.
Getting the tool into the docker container (yum / apt-get install socat, build from source?).
Getting it to run automatically on container start/restart (one possible approach is sketched below).
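One way to cover that last point could be an entrypoint wrapper (a sketch; /usr/local/bin/myapp and the target IP are placeholders):
#!/bin/sh
# start the port redirect in the background, then hand off to the real service
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345 &
exec /usr/local/bin/myapp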

Docker containers that are not running on localhost

For regular docker containers (say the hello world example), after you run one it is accessible through localhost, where you can make a request to it through your browser.
But sometimes, to access a container, you seem to need a special IP address. I'm wondering what this behavior of docker container networking is called and where it is defined/documented.
Let's say my local IP address is 10.0.75.1 (taken from the network adapter named vEthernet (DockerNAT) in the Windows network settings). But in order to connect to a running container I had to use the IP address 10.0.75.2. Why is this?
If I try to inspect existing docker networks using docker network [cmd], the containers seem to be on different subnets, for example '172.17.0.0/16'.
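For reference, you can see the subnet a network uses with an inspect query like this (bridge is the default network on Linux; the output line is just an example):
docker network inspect -f '{{json .IPAM.Config}}' bridge
# e.g. [{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]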

Can't resolve set hostname from another docker container in same network

I have a db and a server container, both running on the same network. I can ping the db container by its container ID with no problem.
When I set a hostname for the db container manually (-h myname), it took effect ($ hostname returns the set name), but I can't ping that hostname from another container on the same network. The container ID is still pingable.
It works with no problem in docker compose, though.
What am I missing?
Hostname is not used by docker's built-in DNS service. It's a counterintuitive exception, but since hostnames can change outside of docker's control, it makes some sense. Docker's DNS will resolve:
the container ID
the container name
any network aliases you define for the container on that network
The easiest of these options is the last one, which is automatically configured when running containers with a compose file: the service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.
You need to be on a user-created network, not something like the default bridge, which has DNS disabled. This is done by default when running containers with a compose file.
Avoid using links, since they are deprecated. And I'd only recommend adding host entries for external static hosts that are not in any DNS; for container-to-container traffic, or for access to other hosts outside of docker, DNS is preferred.
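A minimal sketch of the alias approach (the network name, alias, and images here are arbitrary):
docker network create mynet
docker run -d --network mynet --network-alias db nginx
docker run --rm --network mynet busybox ping -c 1 db   # resolves via the alias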
I've found out that the problem can be solved without a shared network by using the --add-host option. The container's IP can be obtained with the inspect command.
But when containers are on the same network, they are able to access each other via their names.
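A sketch of that approach (the container name db, the IP shown, and myserver-image are placeholders):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db
# suppose this prints 172.17.0.2, then:
docker run --add-host db:172.17.0.2 myserver-image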
As stated in the docker docs, if you start a container on the default bridge network, adding -h myname will add this information to /etc/hosts, /etc/resolv.conf, and the bash prompt of the container just started.
However, this will not have any effect to other independent containers. (You could use --link to add this information to /etc/hosts of other containers. However, --link is deprecated.)
On the other hand, when you create a user-defined bridge network, docker provides an embedded DNS server to make name lookups between containers on that network possible, see Embedded DNS server in user-defined networks. Name resolution takes the container names defined with --name. (You will not find another container by using its --hostname value.)
The reason why it works with docker-compose is that docker-compose creates a custom network for you and automatically names the containers.
The situation seems to be a bit different when you don't specify a name for the container yourself. The run reference says:
If you do not assign a container name with the --name option, then the daemon generates a random string name for you. [...] If you specify a name, you can use it when referencing the container within a Docker network.
In agreement with your findings, this should be read as: If you don't specify a custom --name, you cannot use the auto-generated name to look up other containers on the same network.
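To illustrate the difference, a small sketch (all names here are arbitrary):
docker network create testnet
docker run -d --network testnet --name web -h webhost nginx
docker run --rm --network testnet busybox ping -c 1 web       # works: --name is resolvable
docker run --rm --network testnet busybox ping -c 1 webhost   # fails: --hostname is not in DNS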

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up a proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with one, max two commands and then "just work", that might be something.
2) Essentially what's needed here is something that listens on a port inside the container and, like a sort of proxy, routes traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there were something akin to this that was fully automated, understood Docker, and, again, was possible to get up and running with one or two commands, that'd be an acceptable solution.
I think you can run your container with the --net=host option. In this case the container will bind to the host's network and will be able to access all the ports on your local machine.
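For the MySQL example above, that would look something like this (a sketch using the client bundled in the official mysql image; credentials are placeholders). Note that --net=host only shares the local machine's network stack when Docker runs natively on Linux; inside a Docker Machine VM, the "host" is the VM:
docker run -it --rm --net=host mysql mysql -h 127.0.0.1 -P 3306 -u root -p
# with host networking, 127.0.0.1 inside the container is the host itself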
