A few days ago I tried to configure a Kafka Docker container with Docker Compose and port mapping, and discovered interesting behavior that I do not fully understand:
the Kafka broker seems to connect to itself. Why?
My set up is:
Ubuntu 14.04, Docker 1.13.1, Docker-Compose 1.5.2
Kafka 0.10 listens on port 9092; this port is exposed by the container.
In Docker Compose I have a port mapping from container port 9092 to local port 4005.
I configured the host name of my Docker host machine and the local port from Compose in advertised.listeners (docker-host:4005), since the broker should be visible from my company network.
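For reference, a minimal docker-compose sketch of a setup like this might look as follows (the service name, image name, and environment variable are assumptions for illustration, not from the original setup):

```yaml
services:
  kafka:
    image: my-kafka:0.10             # assumed image name
    ports:
      - "4005:9092"                  # host port 4005 -> container port 9092
    environment:
      ADVERTISED_HOST: docker-host   # hostname visible from the company network
```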
With this setup, when I try to send/fetch data to/from Kafka, all attempts end with:
Topic metadata fetch included errors: {topic_name=LEADER_NOT_AVAILABLE}
After trying various combinations of ports and host names in advertised.listeners, I discovered that the sole working combination is localhost:9092. Any attempt to change the hostname or port led to the error mentioned above.
This made me think that Kafka tries to connect to the address configured in advertised.listeners, and that this is somehow related to topic metadata.
So inside the Docker container I did the following.

Redirect traffic for "docker-host" to loopback:

echo "127.0.0.1 $ADVERTISED_HOST" >> /etc/hosts

Configure Kafka to listen on all interfaces, on exactly the advertised port:

sed -r -i "s/#(listeners)=(.*)/\1=PLAINTEXT:\/\/0.0.0.0:4005/g" $KAFKA_HOME/config/server.properties

Advertise "docker-host" and the external port:

sed -r -i "s/#(advertised.listeners)=(.*)/\1=PLAINTEXT:\/\/$ADVERTISED_HOST:4005/g" $KAFKA_HOME/config/server.properties
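As a sanity check, the effect of those two sed commands can be reproduced against the stock commented-out defaults from server.properties (a sketch: the file below contains only the two relevant upstream default lines, and ADVERTISED_HOST is assumed to be docker-host):

```shell
# Minimal stand-in for $KAFKA_HOME/config/server.properties,
# containing only the two commented-out defaults we care about.
cat > /tmp/server.properties <<'EOF'
#listeners=PLAINTEXT://:9092
#advertised.listeners=PLAINTEXT://your.host.name:9092
EOF

ADVERTISED_HOST=docker-host   # assumed value for this sketch

# Uncomment and rewrite both properties, exactly as in the commands above.
# Note the first pattern only matches "#listeners=", not "#advertised.listeners=".
sed -r -i "s/#(listeners)=(.*)/\1=PLAINTEXT:\/\/0.0.0.0:4005/g" /tmp/server.properties
sed -r -i "s/#(advertised.listeners)=(.*)/\1=PLAINTEXT:\/\/$ADVERTISED_HOST:4005/g" /tmp/server.properties

cat /tmp/server.properties
# listeners=PLAINTEXT://0.0.0.0:4005
# advertised.listeners=PLAINTEXT://docker-host:4005
```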
And now it works like a charm.
However, I still do not understand:
Why might the Kafka broker need to connect to itself via the address configured in advertised.listeners?
Is there a way to disable this, or at least to configure it to use the address from the 'listeners' property (with the default Kafka port)?
UPD
Worth mentioning: the following setup does not work: Kafka listens on 0.0.0.0:9092, and the advertised listener is configured as docker-host:4005.
In this case, whenever a consumer or producer connects to Kafka, it receives LEADER_NOT_AVAILABLE.
netstat (within the container) also shows a connection to docker-host:4005 in state SYN_SENT.
UPD 2
It looks like a similar problem with Kafka, but inside AWS, is described here.
The difference is that in my case I want to use a different Kafka port.
UPD 3
OK, the reason the setup mentioned in the first UPD paragraph does not work is UFW: for some reason it blocks traffic that goes from the Docker container to itself via the host machine.
Why Kafka broker might need to connect to itself via address
configured in advertised.listeners ?
When a Kafka broker is first connected by a client, it replies back with the address that it expects that client to use in the future to talk to the broker. This is what is set in the advertised.listeners property. If you don't set this property, the value from listeners will be used instead (which answers your second question).
So your "issue" is that remote clients connect to yourhost:9092 and reach the Kafka broker because you forwarded the port; the broker then responds with "you can reach me at localhost:9092", and when the client sends the next packet there, it just connects back to itself.
The metadata is not really relevant here; it's just the first request that gets made.
Your solution is correct for this setup I think, have Kafka listen on local interfaces and set the advertised.listeners to the host that someone from your company network would connect to.
I don't know 100% whether the broker needs to connect to itself as well, but I'm pretty sure that's not the case. I think your setup would also work without the entry for the external hostname in your /etc/hosts file.
Is there a way to disable this or at least configure it to use address
from 'listeners' property (with default Kafka port) ?
see above
Related
I know that the title might be confusing so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A through which to communicate with Server B. Let's assume the port is 1337.
As I mentioned, to access Server B I send packets to 127.0.0.1:1337.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to use telnet OVER the ssh reverse tunnel.
Moreover, we created an Alpine container with BusyBox inside it to eliminate any access to the host. And here is our problem.
The tunnel is created on the host, yet the telnet client is inside a docker container. Those are two separate systems.
I can share my host network with the container via --network=host, but that defeats the encapsulation idea of the Docker container.
Binding the container to the host like -p 127.0.0.1:1337:1337 complains that the port is already in use and it can't bind to it (of course; ssh is using it).
Mapping ports from the host to the container also doesn't work, since the telnet client isn't forwarding the traffic to a specific port, so we can't just "sniff" it out.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and trying to configure iptables rules to limit the docker functionality over the network but my iptables skills aren't really great.
The port forward does not work because it goes in the wrong direction. -p 127.0.0.1:1337:1337 means "take everything that comes in on that host port and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps:
The following steps require at least Docker v20.10.
On the host: Bind your tunnel to the docker0 interface on the host (might require that you figure out the ip of that interface first). In other words, referring to your example, ensure that the local side of the tunnel does not end at 127.0.0.1:1337 but <ip of host interface docker0>:1337
On the host: Add --add-host host.docker.internal:host-gateway to your docker run command
Inside your container: telnet to host.docker.internal (a special DNS name) on the port you bound in step 1 (i.e. 1337)
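The three steps above can be sketched as shell commands. This is only a sketch under assumptions: the host names (serverA, serverB, user), the remote Telnet port 23, and the docker0 address 172.17.0.1 (the usual default; verify with `ip addr show docker0`) are all placeholders, and binding the tunnel to a non-loopback address requires `GatewayPorts clientspecified` in Server A's sshd config:

```shell
# 1. On Server B: open the reverse tunnel so that its local end on Server A
#    binds to the docker0 address instead of 127.0.0.1.
ssh -N -R 172.17.0.1:1337:localhost:23 user@serverA

# 2. On Server A: start the container with a DNS name for the host gateway.
docker run --rm -it --add-host host.docker.internal:host-gateway alpine sh

# 3. Inside the container: connect through the tunnel.
telnet host.docker.internal 1337
```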
I have a Docker Compose setup with a few containers running. I start them via the docker-compose up command. On my device everything works well with localhost, but I want other devices on the same network to be able to access the MQTT broker as well. How do I do that?
Currently, in my code I do this:
ws://localhost:9001
But since localhost applies only to the device that runs Docker, another laptop won't be able to use it. How do I solve that?
You use the LAN IP address of your machine (the one hosting the docker containers) in place of localhost.
We have no way of knowing what that address may be, but it likely starts with 192.168.x.x or 10.x.x.x.
By default, Docker has a "bridge" network that will bridge your container to the outside world. Just use the IP address of the computer where your MQTT Broker Container is running, and port 9001, and it should work fine.
If you need to run it on an internal Docker network, you will have to use something like an ADC or TCP Proxy of some sort to allow access to it.
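The first approach can be sketched like this (the image name and the LAN IP 192.168.1.50 are assumptions, and the broker is assumed to already have a websocket listener on 9001, as in the question):

```yaml
# docker-compose.yml (fragment)
services:
  mqtt:
    image: eclipse-mosquitto
    ports:
      - "9001:9001"   # published on all host interfaces, so LAN peers can reach it
```

Then, in the client, replace localhost with the host's LAN address, e.g. ws://192.168.1.50:9001.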
I have a client running in a docker container that subscribes to a MQTT broker and then writes the data into a database.
To connect to the MQTT broker, I have to set up port forwarding.
While developing the client on my local machine the following worked fine:
ssh -L 9000:localhost:1883 <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create a Docker network "testNetwork".
Run an ssh_tunnel container within "testNetwork" and set up the port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker via the bridge network ("ssh_tunnel.testNetwork:")
(I want two separate containers for this because the IP address will have to change quite often, and I don't want to rebuild the client container all the time.)
But all of my attempts have failed so far. The forwarding seems to work (I can access the shell on the server in the ssh container) but I haven't found a way to actually subscribe to the mqtt broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours by now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the gateway forwarding option (-g) to allow remote hosts (the database client) to connect to the forwarded ports:
ssh -g -L <hostport>:localhost:<mqtt-port> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
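The resulting two-container layout can be sketched in docker-compose terms (service names, image names, and the MQTT_URL environment variable are assumptions for illustration):

```yaml
services:
  ssh_tunnel:
    image: my-ssh-tunnel          # runs: ssh -g -L 9000:localhost:1883 user@broker-host
    networks: [testNetwork]
  database_client:
    image: my-database-client
    environment:
      MQTT_URL: tcp://ssh_tunnel:9000   # reach the tunnel via its service name
    networks: [testNetwork]

networks:
  testNetwork:
```

Only the tunnel container needs rebuilding or reconfiguring when the remote IP changes; the client always connects to the stable name ssh_tunnel.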
The following is based on this quickstart guide: http://docs.confluent.io/current/cp-docker-images/docs/quickstart.html
In there they demonstrate various Kafka/Confluent components in their own Docker containers, each started with the --net=host flag and accessed via localhost:<port>.
No matter what I do, I am unable to access this from outside the Ubuntu server itself, neither via IP nor domain, which they state should work, e.g. for the Control Center.
But on the same host, my ubuntu box, everything works fine.
Any idea what the issue could be here? I've been stuck on this for hours already.
Is the problem that you can't access the port (i.e. telnet is not possible) or that you can't make the server work (i.e. no requests are answered by the server)?
There is -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \: this is the address the Kafka broker communicates to clients for use after the initial connection (and except on localhost itself, this obviously won't work...). You can replace localhost with the IP of your server, and it should work properly.
To be more specific: there is the listeners config (default 0.0.0.0:9092) and advertised.listeners (default PLAINTEXT://localhost:9092).
The client initializes the connection (to the bootstrap-server or broker-list) via the listener binding. Once this initial connection is done, the broker returns all advertised.listeners from all brokers in the cluster, and this is what is used for later exchanges.
In the GitHub repo, Confluent assigns to listeners the value of the advertised listener, changing the host to 0.0.0.0 (so it will be accessible remotely).
You can't set 0.0.0.0 in advertised.listeners though; it must be a unique reachable interface.
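As a sketch (the server IP 192.0.2.10 and the ZooKeeper address are placeholders, and the environment variable names follow the Confluent image convention of mapping KAFKA_* variables onto broker properties), the broker can bind on all interfaces while advertising the server's reachable address:

```shell
docker run -d --net=host \
  -e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:29092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.0.2.10:29092 \
  confluentinc/cp-kafka
```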
Answering myself now; it was a stupidly simple firewall/Docker thing:
be careful when using UFW (firewall) and Docker;
see http://blog.viktorpetersson.com/post/101707677489/the-dangers-of-ufw-docker
I used a simple Node.js hello-world example on port 3000 and was able to connect to it from the outside as long as I used the default/bridge networking and published the port (-p 3000:3000).
Using the host network I was not able to connect at all.
In both cases the firewall (UFW on Ubuntu) did not explicitly allow port 3000, so Docker must be doing some hidden magic here, rewriting iptables without UFW noticing, to allow bridged and published ports through.
=> solved by explicitly opening kafka ports in UFW
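Concretely, the fix was along these lines (the exact port numbers are assumptions; 29092 is the broker port used above, and 9021 is the usual Control Center UI port):

```shell
sudo ufw allow 29092/tcp   # Kafka broker (advertised listener port)
sudo ufw allow 9021/tcp    # Control Center UI
sudo ufw reload
```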
I have a server running inside a docker container, listening on UDP port, let's say 1234. This port is exposed in Dockerfile.
Also, I have an external server helping with NAT traversal; basically, it just sends the addresses of the registered server and a client to each other, allowing a client to connect to a server by the name it sent during registration.
Now, if I run my container with the -P option, my port gets published as some random port, e.g. 32774. But on the helper server I see my server connecting from port 1234, so it can't send a correct address to the client, and the client can't connect at all.
If I explicitly publish my server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now on the helper server I see my server connecting from port 1236, and again it can't send the correct port to the client.
How can this be resolved? My aim is to require as little additional configuration as possible from people who will use my Docker image.
EDIT: So, I need either to know my external port number from inside the container so I can send it to the discovery server (which, as I understand it, is not possible at the moment, right?), or to make outgoing connections from the container on my port use the same external port as configured for incoming connections. Is that possible?
The ports are managed by Docker and the Docker network adapter. When using only -P, the port is exposed Docker-internally and accessible through Docker linking. When using "-p 1234:1234", the port is mapped to a host port and directly available to a client, and also available for linking.
Start the helper server with a link option, "--link <server-container-name>:server". The helper server can then connect to host "server" on port 1234; the correct IP address will be managed by Docker.
Enable Docker to change your iptables configuration, which is the Docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should provide the host IP, not the Docker container IP address: the container IP only works inside the host where the Docker network adapter is running.