Docker Mac network_mode host and kubernetes using kind [closed]

I saw in a few issues (such as https://github.com/docker/for-mac/issues/2716) that network_mode: host is not supported on Mac.
However, when I run a Kubernetes cluster using kind and run Spark on K8s, with the Spark master address set to k8s://127.0.0.1:59369 (the same address and port as the K8s control plane), I get a Connection Refused error unless I use network_mode: host. Error line:
Caused by: java.net.ConnectException: Failed to connect to /127.0.0.1:59369
This confuses me: I thought network_mode: host should have no effect on Mac, yet it clearly does, and it is blocking me from using a bridge network and adding a port mapping for another application in this container.
Any ideas how to resolve this?

Specifying host networking on Docker Desktop setups does something; the problem is that the "host" network it specifies isn't actually the "host" you'd intuitively expect as someone typing on the keyboard.
Docker Desktop on macOS launches a hidden Linux VM, and the containers run inside that VM. When you run docker run -p, Docker Desktop is able to forward that port from the VM to the host, so port mappings work normally. If you use docker run --net=host, though, you get access to the VM's host network; since this generally disables Docker's networking layer, Docker isn't able to discover what the container might be doing, and it can't forward anything from the actual host into the VM to the container. That's the sense in which host networking "doesn't work".
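A minimal sketch of the difference, using a stock nginx image as a stand-in service (exact behaviour can vary with the Docker Desktop version):

docker run -d -p 8080:80 nginx    # Docker Desktop forwards the published port out of the VM; curl localhost:8080 works on the Mac
docker run -d --net=host nginx    # binds port 80 inside the hidden Linux VM only; nothing is forwarded to the Mac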
In practice I see host networking suggested for four things:
Processes with unpredictable or a very large number of port mappings, where docker run -p can't be used
To actually manage the host's network environment, in spite of running in a container
To access non-container processes on the same host, where host.docker.internal would also work
To access the published ports of other containers, without setting up Docker networking
I think you're running into this last case here. Kind publishes a port for the Kubernetes API server, so port 59369 is reachable both on the actual host and inside the Linux VM. Now if your Spark container turns on host networking it is using the VM's host network, but the other container's published port is still reachable there, which is why the http://localhost:59369 URL still works.
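A hedged sketch of what that looks like from inside containers (the curlimages/curl image is just a convenient stand-in; whether an unauthenticated request succeeds or returns 401 doesn't matter here, only whether the TCP connection is made):

docker run --rm --net=host curlimages/curl -sk https://127.0.0.1:59369/version
# host networking = the VM's network namespace, where kind's published port is visible on 127.0.0.1

docker run --rm curlimages/curl -sk https://127.0.0.1:59369/version
# on the default bridge network this fails: 127.0.0.1 is the container itself, not the VM or the Mac;
# host.docker.internal is the usual Docker Desktop alias for reaching ports published on the Mac instead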

For Docker Networking: Why (what scenario(s)) would you not use just "--network host" for "Host" mode networking?

This is a follow-up to an earlier question I asked: https://stackoverflow.com/questions/72046646/does-docker-persist-the-resolv-conf-from-the-physical-etc-resolv-conf-in-the-co.
I've been testing with containers on two different machines using "--network host", and from that earlier thread it seems that in that case the container uses a default "host" mode network named "host"(?).
Since with "host" mode networking the container and the app inside it are basically on the same IP as the physical host where the container is running, under what (example) scenarios would you actually want to create a named "host" mode network and then have a container use that named network?
What would the advantages/differences be between using a custom/named "host" mode network vs. just using "--network host"?
It seems like both situations (using "--network host" vs. "docker network create xyz" where xyz is a named host network, and then running the container with "docker run --network xyz") would be functionally the same?
Sorry for the newbie question :( and thanks again in advance.
Jim
I don't think you can create a host-mode named network, and if you did, there'd be no reason to use it. If you need host networking – and you almost certainly don't – use docker run --net host or Compose network_mode: host.
But really, you don't need host networking.
With standard Docker networking, you can use docker run -p to publish individual ports out to the host. You can choose not to publish a given port, and you can remap ports. This also means that if, for example, you're running three services each with their own PostgreSQL server, there's no conflict over the single port 5432.
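A minimal sketch of that remapping (container names, host ports, and the password value are arbitrary placeholders):

docker run -d --name app1-db -e POSTGRES_PASSWORD=example -p 5433:5432 postgres:15
docker run -d --name app2-db -e POSTGRES_PASSWORD=example -p 5434:5432 postgres:15
docker run -d --name app3-db -e POSTGRES_PASSWORD=example -p 5435:5432 postgres:15
# each server still listens on 5432 inside its own container; only the host-side ports differ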
The cases where you actually need it are pretty limited. If an application listens on a very large number of ports or it doesn't listen on a predictable port then the docker run -p mechanism doesn't work well. If it needs to actively manage the host network then it needs to be given access to it (and it might be better run outside a container). If you've hard-coded localhost in your application, then in Docker your database isn't usually there (configuration via environment variables would be better).

Connect a docker's container port that is running inside an Ubuntu VM, with the VM's host machine network [closed]

I have Docker running inside an Ubuntu 18.04 VM (VMware Player) which is hosted on a Windows PC. Docker runs a container for GitLab which I can access through localhost:4000 from my VM. The question is how I can access the very same GitLab instance from my Windows PC. From my understanding there are two layers I need to connect: first Docker with the VM host, and second the VM host with the Windows host. I've tried creating a bridged connection between the Windows host and the VM but I couldn't make it work. Please provide a detailed answer with steps if possible.
OK, problem solved thanks to PanosMR.
The solution for me was to set the VM network to host-only, then assign a subnet to the VM such as 192.168.42.0 with a mask of 255.255.255.0.
After that I checked which IP my VM was assigned; it was 192.168.42.128. Then, in Docker inside my Ubuntu VM, I set the GitLab container's --publish address to that same VM IP plus the port.
For example --publish 192.168.42.128:4000:80 and boom! Once the GitLab container started, I had access from my Windows PC on that IP.
That was the simplest solution I've ever seen, and also the only legitimate one.
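For completeness, a hedged sketch of that publish flag in a full command (the image name is only illustrative; the original post doesn't say which GitLab image was used):

docker run -d --name gitlab --publish 192.168.42.128:4000:80 gitlab/gitlab-ce:latest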
If I remember correctly, VirtualBox has a settings screen to configure port forwarding; search Google for that.

How can a container connect to a service on the docker host? [duplicate]

There were two related questions but they didn't quite answer the question. But if mods think this is a duplicate, please let me know.
I have a docker-compose.yml file that deploys phpmyadmin. There is a mysql server hosted locally on the host proper (NOT as a container).
I have a config file for phpMyAdmin to connect to my database. I can't seem to find a domain name for the Docker host, so I've been taking the subnet that the containers deploy on and using the .1 address of that subnet. For example, initially the containers deployed to 172.16.0.0/24, so in phpMyAdmin's configuration I declared the connection to 172.16.0.1.
This question was born out of the fact that every time I re-deployed (i.e. issued docker-compose down && docker-compose up -d) the network address kept changing. My workaround is to explicitly declare the default network's ipam subnet, which is fine and actually preferred, because I can then pin MySQL user logins to the IP address range.
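For reference, a hedged sketch of that workaround in docker-compose.yml (the subnet value just mirrors the example above):

networks:
  default:
    ipam:
      config:
        - subnet: 172.16.0.0/24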
But given that docker knows to resolve the container service "phpmyadmin" to the container's IP address, I figured there MUST be something for the host so I wouldn't have to re-declare the IP address each time.
So, is there a "hostname" that a container can use to talk to the host or am I stuck using IP addresses?
Edit: I'm using docker on Linux and would very much prefer to not run the container in host mode.
The simplest solution is to set network_mode: "host" in your compose file for the container. See https://docs.docker.com/compose/compose-file/#network_mode.
Alternatively, you can use host.docker.internal as the hostname from within the container if you are using Docker for Windows or Docker for Mac.
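A hedged sketch of the first option for a phpMyAdmin service (the image and the PMA_HOST variable follow the common phpmyadmin image conventions, not the asker's actual config):

services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    network_mode: "host"     # shares the host's network namespace, so the locally hosted MySQL is at 127.0.0.1
    environment:
      PMA_HOST: 127.0.0.1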

Apache Kafka in docker AND VirtualBox VM

I'm trying to use Apache Kafka, i.e. a version usable with Docker (wurstmeister/kafka-docker), inside a virtual machine, and to connect to the Kafka broker in the containers from the host system of the VM.
I will describe the setup in more detail:
My host system is an Ubuntu 64-bit 14.04.3 LTS (kernel 3.13) running on an ordinary computer. I have a complete and complex structure of various Docker containers interacting with each other. In order not to disturb, or better said, to encapsulate this whole structure, it is not an option to run the Docker images directly on the host system. Another reason is the need for different Python libraries on the host, which interfere with the Python library version required by docker-compose (which is used to start the different Docker images).
Therefore the desired solution is to set up a virtual machine in VirtualBox (guest system: Ubuntu 16.04.1 LTS) and run the Docker environment entirely in this VM. This has the distinct advantage that the VM itself can be configured exactly according to the requirements of the Docker structure.
As mentioned above, one of the Docker images provides Kafka and ZooKeeper functionality for communication and messaging. That means the .yaml file which sets up a container running this image forwards all necessary Kafka and ZooKeeper ports to the host system of the Docker environment (which is the guest system of the VirtualBox VM). In order to also make the Docker environment visible to the host system, I forwarded all the ports in the VirtualBox network settings (network -> adapter NAT -> advanced -> port forwarding). The behaviour of the system is as follows:
When I run the docker-environment (including kafka) I can connect, consume and produce from the VM-kafka, which uses the recommended standard kafka shell scripts (producer & consumer api).
When I run a Kafka and zookeeper server ON THE VM guest system, I can connect from outside the VM (host), produce and consume over producer & consumer api.
When I run Kafka in the docker environment I can connect from the host system outside the VM, meaning I can see all topics, get infos about topics, while also seeing some debug output from kafka and zookeeper in the docker.
What is unfortunately not possible is to produce or consume any messages from the host system to/from the Docker Kafka. The producer API gives a "Batch expired" exception and the consumer returns a "ClosedChannelException".
I have done a lot of googling and found a lot of hints on how to solve similar problems. Most of them refer to the advertised.host.name parameter in the Kafka server properties, which is accessible as KAFKA_ADVERTISED_HOST_NAME in the .yaml file. https://stackoverflow.com/a/35374057/3770602 e.g. refers to that parameter when both of the errors mentioned above occur. Unfortunately none of the scenarios feature both Docker AND VM functionality.
Furthermore, trying to modify this parameter does not have any effect at all. Although I am not very familiar with Docker and Kafka, I can see a problem here, as the Kafka consumer and producer would get the local IP of the Docker environment, which is 10.0.2.15 in the NAT case, to use as the broker address. But unfortunately this IP is not visible from outside the VM. Consequently the solution would probably be a changed setup, where one should use bridged mode in the VirtualBox networking. The strange thing here is that a bridged connection (of course) gives the VM its own IP over DHCP, which then makes the Docker Kafka inaccessible from both the VM AND the host system. This last behaviour seems very odd to me.
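For reference, a hedged sketch of the kind of compose stanza being described for wurstmeister/kafka (the IP shown is only a placeholder for whatever address external clients can actually reach, not a value from my setup):

kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: 192.168.56.10   # must be reachable by external clients, not the NAT-internal 10.0.2.15
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181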
Concluding, my question would be whether somebody has experience with this or a similar scenario and can tell me how to configure the Docker Kafka, the VirtualBox VM and the host system. I have really tried a lot of different settings for the consumer and producer calls with no success. Of course Docker, Kafka, or both Docker & Kafka experts are welcome to answer as well.
If you need additional information or you can provide some hints, please let me know.
Thanks in advance

Cross container communication with Docker

An application server is running in one Docker container and a database is running in another container. The IP address of the database container is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives a "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that the IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the host machine's address as the destination IP in the required Docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
Full disclosure: I am on the Weave engineering team.
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your db is a container on docker server 2? If so, you treat it like ordinary remote hosts. Your DB port needs to be exposed on docker server 2 and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host Docker subnetwork is a private network. It's perhaps possible to make these addresses routable, but it would be a lot of pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in the Dockerfile and -p in your docker run). Then you just do host-to-host. You can resolve hosts by IP, environment variables, or good old DNS.
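A minimal sketch of that host-to-host pattern (the hostnames, images, and DB_HOST/DB_PORT variables are illustrative, not from the question):

# on docker server 2: publish the database port on the host
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example -p 3306:3306 mysql:8

# on docker server 1: point the application at server 2's host address and the published port
docker run -d --name app -e DB_HOST=docker-server-2 -e DB_PORT=3306 my-app-image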
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0" (see the sketch after this list).
The firewall was not allowing the containers to communicate. This was disabled using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
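A hedged sketch of the WildFly binding fix from the first point (the image and script path follow the common jboss/wildfly image conventions, not necessarily the exact setup in the linked post):

docker run -d --name wildfly -p 8080:8080 jboss/wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0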
