Cannot connect from kafkacat running in Docker to a Kafka broker running locally on a Windows machine - docker

I am running Kafka locally on a Windows machine. I start Kafka with
.\bin\windows\kafka-server-start.bat .\config\server.properties
The relevant server.properties settings are:
listeners=Listener_BOB://:29092,Listener_Kafkacat://127.0.0.1:9092
advertised.listeners=Listener_BOB://:29092,Listener_Kafkacat://127.0.0.1:9092
listener.security.protocol.map=Listener_BOB:PLAINTEXT,Listener_Kafkacat:PLAINTEXT
inter.broker.listener.name=Listener_BOB
I am running kafkacat using docker
docker run -it --network="host" --name="producer" confluentinc/cp-kafkacat bash
When I run
kafkacat -b host.docker.internal:9092 -C -t test
I get the error message:
% ERROR: Local: Broker transport failure: 127.0.0.1:9092/0: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
I understand that I could run Kafka in Docker instead, but I would like to know why I cannot connect to the broker and produce or consume messages. I have tried different things and tried to understand what the listeners do, but I couldn't wrap my head around why this doesn't work.
When I do
kafkacat -b host.docker.internal:9092 -t test -L
I get
1 brokers:
broker 0 at 127.0.0.1:9092
1 topics:
topic "test" with 1 partitions:
partition 0, leader 0, replicas: 0, isrs: 0
Maybe someone can explain what I am doing wrong or tell me why this is not possible.
I downloaded the latest version of Kafka: kafka-2.4
Machine is Windows 10
Docker for Windows running using Linux containers

You need to set host.docker.internal:9092 as the advertised listener, not localhost / 127.0.0.1.
That's the address that will be returned to clients when they try to connect.
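Applied to your server.properties, that means something like this (a sketch; the Kafkacat listener is bound to all interfaces here so that Docker's gateway can reach it):
listeners=Listener_BOB://:29092,Listener_Kafkacat://0.0.0.0:9092
advertised.listeners=Listener_BOB://:29092,Listener_Kafkacat://host.docker.internal:9092
listener.security.protocol.map=Listener_BOB:PLAINTEXT,Listener_Kafkacat:PLAINTEXT
inter.broker.listener.name=Listener_BOB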
When you do that, you should be able to see this
1 brokers:
broker 0 at host.docker.internal:9092
Otherwise, as you can tell, the bootstrap connection works, but the advertised address returned for that broker is 127.0.0.1, which obviously doesn't work because, from inside the container, that would be the kafkacat container itself.
Note: --network="host" doesn't do what you expect outside of a Linux host machine, so you're best off removing it
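That leaves the run command as simply (your original command minus the host networking flag):
docker run -it --name="producer" confluentinc/cp-kafkacat bash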

Related

Dockerized Zabbix: Server Can't Connect to the Agents by IP

Problem:
I'm trying to configure a fully containerized Zabbix version 6.0 monitoring system on Ubuntu 20.04 LTS using Zabbix's Docker-Compose repo.
The command I used to raise the Zabbix server and also a Zabbix Agent is:
docker-compose -f docker-compose_v3_ubuntu_pgsql_latest.yaml --profile all up -d
Although the Agent rises in a broken state and shows a "red" status, once I change its IP address FROM 127.0.0.1 TO 172.16.239.6 (the default IP Docker-Compose assigns to it), the Zabbix Server can successfully connect and monitoring is established. HOWEVER: the Zabbix Server cannot connect to any other Dockerized Zabbix Agents on REMOTE hosts, which are raised with the docker run command:
docker run --add-host=zabbix-server:172.16.238.3 -p 10050:10050 -d --privileged --name DockerHost3-zabbix-agent -e ZBX_SERVER_HOST="zabbix-server" -e ZBX_PASSIVE_ALLOW="true" zabbix/zabbix-agent:ubuntu-6.0-latest
NOTE: I looked at other Stack groups to post this question, but Stack Overflow appeared to be the go-to group for these Docker/Zabbix issues, having over 30 such questions.
Troubleshooting:
Comparative Analysis:
Agent Configuration:
Comparative analysis of the working ("green") Agent on the same host as the Zabbix Server against the Agents on different hosts showing "red" statuses (not contactable by the Zabbix Server), using the following commands, shows the configurations have parity.
docker exec -u root -it (ID of agent container returned from "docker ps") bash
And then execute:
grep -Ev ^'(#|$)' /etc/zabbix/zabbix_agentd.conf
Ports:
The output of the command below showed the same ports open on the "red" Agents as on the "green" Agent running on the same host as the Zabbix Server:
ss -luntu
NOTE: This command was issued from the HOST, not the Docker container for the Agent.
Firewalling:
Review of the iptables rules from the HOST (not container) using the following command didn't reveal anything of concern:
iptables -nvx -L --line-numbers
But to exclude firewalling, I nonetheless allowed everything in the iptables FORWARD chain on both the Zabbix Server and a "red"-status Agent used for testing.
I also allowed everything on the MikroTik GW router connecting the Zabbix Server to the different physical hosts running the Zabbix Agents.
Routing:
The Zabbix Server can ping the remote Agent interfaces, proving there's a route to the Agents.
AppArmor:
I also stopped AppArmor to exclude it as being causal:
sudo systemctl stop apparmor
sudo systemctl status apparmor
Summary:
So everything is wide open, the Zabbix Server can route to the Agents, and the config of the "red" Agents has parity with the config of the "green" Agent living on the same host as the Zabbix Server itself.
I've set up non-containerized Zabbix installations in production environments successfully, so I'm otherwise familiar with Zabbix.
Why can't the containerized Zabbix Server connect to the containerized Zabbix Agents on different hosts?
Short Answer:
There was NOTHING wrong with the Zabbix config; this was a Docker-induced problem.
docker logs <hostname of Zabbix server> revealed that NATing appeared to be happening on the Zabbix SERVER's side, and indeed there was.
Docker was modifying the iptables NAT table on the host running the Zabbix Server container, causing the source address of the Zabbix Server to present as the IP of the physical host itself, not the Docker-Compose-assigned IP address of 172.16.238.3.
Thus, the Agent was not expecting this address and refused the connection. My experience of Dockerized apps is that they are mostly good at modifying iptables to create the correct connectivity, but not in this particular case ;-).
I then reviewed the NAT table by executing the following command on the HOST (not the container):
iptables -t nat -nvx -L --line-numbers
This revealed that Docker was being, erm, "helpful" and NATing the Zabbix Server's traffic.
I deleted the offending rules by their rule number:
iptables -t nat -D <chain> <rule #>
After this, the Zabbix Server's IP address was presented correctly to the Agents, which accepted the connections, and their statuses turned "green".
The problem is reproducible: if you execute
docker-compose -f docker-compose_v3_ubuntu_pgsql_latest.yaml down
and then run the up command to raise the containers again, you'll see the offending iptables rules restored to the NAT table of the host running the Zabbix Server's container, breaking connectivity with the Agents.
Longer Answer:
Below are the steps required to identify and resolve the problem of the Zabbix Server NATing its traffic out of the host's IP:
Identify if the HOST of the Zabbix Server container is NATing:
We need to see how the IP of the Zabbix Server's container presents to the Agents, so we have to get the container ID of a Zabbix AGENT to review its logs:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2fcf38d601f zabbix/zabbix-agent:ubuntu-6.0-latest "/usr/bin/tini -- /u…" 5 hours ago Up 5 hours 0.0.0.0:10050->10050/tcp, :::10050->10050/tcp DockerHost3-zabbix-agent
Next, supply the container ID of the Agent to the docker logs command:
docker logs b2fcf38d601f
Then review the rejected IP address in the log output to confirm it is NOT the Zabbix Server's IP:
81:20220328:000320.589 failed to accept an incoming connection: connection from "NAT'ed IP" rejected, allowed hosts: "zabbix-server"
The fact that you can see this error proves that there are no routing or connectivity issues: the connection is going through, it's just being rejected by the application, NOT the firewall.
If NATing is proven, continue to the next step.
On Zabbix SERVER's Host:
The remediation happens on the Zabbix Server's host itself, not the Agents, which is good because we can fix the problem in one place rather than many.
Execute the command below on the host running the Zabbix Server's container:
iptables -t nat -nvx -L --line-numbers
Output of command:
Chain POSTROUTING (policy ACCEPT 88551 packets, 6025269 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 MASQUERADE all -- * !br-abeaa5aad213 192.168.24.128/28 0.0.0.0/0
2 73786 4427208 MASQUERADE all -- * !br-05094e8a67c0 172.16.238.0/24 0.0.0.0/0
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 RETURN all -- br-abeaa5aad213 * 0.0.0.0/0 0.0.0.0/0
2 95 5700 RETURN all -- br-05094e8a67c0 * 0.0.0.0/0 0.0.0.0/0
We can see the counters incrementing for the POSTROUTING and DOCKER chains (rule #2 in each chain). These rules are clearly matching and having an effect.
Delete the offending rules on the HOST of the Zabbix Server container which is NATing its traffic to the Agents:
sudo iptables -t nat -D POSTROUTING 2
sudo iptables -t nat -D DOCKER 2
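To verify the deletions took, re-run the listing command from earlier and confirm those two rules are gone:
iptables -t nat -nvx -L --line-numbers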
Wait a few moments and the Agents should go "green", assuming there are no other configuration or firewalling issues. If the Agents remain "red" after applying the fix, please work through the troubleshooting steps I documented in the Question section.
Conclusion:
I've tested this: restarting the Zabbix Server container does not recreate the deleted rules. But again, please note that a docker-compose down followed by a docker-compose up WILL recreate the deleted rules and break Agent connectivity.
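If you want the fix to survive a down/up cycle, one avenue to explore (an assumption on my part, not something verified against the Zabbix repo) is disabling IP masquerading on the relevant Compose network via Docker's bridge driver option:
networks:
  zbx_net_backend:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_ip_masquerade: "false"
Here zbx_net_backend stands in for whichever network the Zabbix Server container attaches to in your Compose file.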
Hope this saves other folks wasted cycles. I'm both a Linux and network engineer and this hurt my head, so it would be near impossible to resolve if you're not a dab hand at networking.

Memcached in standalone Docker container time out and port error

I'm running a setup of three Ubuntu virtual machines: two run the Python production code base and the third hosts a Memcached Docker container. On the Memcached machine I ran docker run -dit --name production-memcached --publish 11211:11211 memcached:latest.
The code base gets the following error message when trying to interact with it:
"exception": "TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None)"
I ran docker exec -it production-memcached memcached stats and got the error message below.
failed to listen on TCP port 11211: Address already in use
However, I've run netstat -plnt and get tcp6 0 0 :::11211 :::* LISTEN 35030/docker-proxy, which looks fine to me.
I was able to get this to work by opening port 80 and using iptables to forward incoming port 80 to port 11211 but would prefer to use the Memcached port number.
The Memcached client is created by the following lines (this looks like pymemcache's base client; the import is added for completeness):
from pymemcache.client import base

client = base.Client(("domain.co.uk", 11211))
Any help would be appreciated, thanks.
DigitalOcean doesn't allow inbound traffic on port 11211 to its virtual machines. If you want a Memcached machine, you'll also need a virtual machine to act as a proxy between you and it. I hope this saves someone the headache it caused me!
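For reference, the port-80 workaround described in the question can be expressed as a single NAT rule on the Memcached host (a sketch, assuming iptables and root privileges):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 11211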

Unable to run console consumer using a Kafka broker inside a Docker container

I'm having a problem running a Kafka broker in a Docker container.
I have downloaded and unpacked a tar archive of Kafka 2.12-2.4.1.
I can run Zookeeper and Kafka brokers from the command line, producing and consuming messages successfully. When I build a Docker container with Zookeeper, I can run that alongside a Kafka broker running from the command line, again producing and consuming messages successfully. But when I put the broker into a Docker container, the consumers and producers can't find a broker to connect to.
I have checked to ensure that Zookeeper has a connected broker: the zookeeper "dump" command below
echo dump | nc localhost 2181
returns
SessionTracker dump:
Session Sets (3)/(1):
0 expire at Thu May 28 09:53:33 GMT 2020:
0 expire at Thu May 28 09:53:36 GMT 2020:
1 expire at Thu May 28 09:53:39 GMT 2020:
0x10002dfed700008
ephemeral nodes dump:
Sessions with Ephemerals (1):
0x10002dfed700008:
/controller
/brokers/ids/0
Connections dump:
Connections Sets (2)/(2):
0 expire at Thu May 28 09:53:37 GMT 2020:
2 expire at Thu May 28 09:53:47 GMT 2020:
ip: /172.18.0.1:55656 sessionId: 0x0
ip: /172.18.0.5:48474 sessionId: 0x10002dfed700008
which, to my untrained eye, looks like Zookeeper has a registered broker.
I can successfully list topics with the command
kafka-topics.sh --list --zookeeper localhost:2181
and when I run the command
netstat -plnt
I can see that I have listeners on ports 2181 and 9092, as below
tcp6 0 0 :::9092 :::* LISTEN 54661/docker-proxy
tcp6 0 0 :::2181 :::* LISTEN 47872/docker-proxy
However, when I try to connect a consumer with the command
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic aaaa
I get errors
WARN [Consumer clientId=consumer-console-consumer-8228-1, groupId=console-consumer-8228] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
which would seem to indicate that the client can't contact the broker.
I've tried several different variations on the listeners and advertised.listeners in the broker config file, setting them to localhost and 127.0.0.1, but to no avail.
I'm sure that I'm missing something simple: can anyone help?
Make sure to configure the following:
KAFKA_LISTENERS: LISTENER_INTERNAL://kafka-host:29092,LISTENER_EXTERNAL://kafka-host:9092
KAFKA_ADVERTISED_LISTENERS: LISTENER_INTERNAL://kafka-host:29092,LISTENER_EXTERNAL://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_INTERNAL:PLAINTEXT,LISTENER_EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_INTERNAL
Now all your clients that run within the Docker network should use
kafka-host:29092
All clients outside the Docker network should use the external advertised address
localhost:9092
For a more comprehensive guide, read Robin Moffatt's article.
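As a sketch, those settings might appear in a docker-compose.yml like this (the confluentinc/cp-kafka image, the kafka-host service name, and the zookeeper service are assumptions of mine, not part of the answer above):
services:
  kafka-host:
    image: confluentinc/cp-kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: LISTENER_INTERNAL://kafka-host:29092,LISTENER_EXTERNAL://kafka-host:9092
      KAFKA_ADVERTISED_LISTENERS: LISTENER_INTERNAL://kafka-host:29092,LISTENER_EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_INTERNAL:PLAINTEXT,LISTENER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_INTERNAL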
So, I have solved the problem and I now have a working Docker/Kafka environment: I have a Zookeeper instance, a broker, and a consumer running in Docker, and I can produce messages both from a producer running in Docker and from a producer running on the command line.
As GiorgosMyrianthous suspected, the problem was with the listeners and advertised listeners. I entered the following values in the Kafka broker properties file:
listeners=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
advertised.listeners=PLAINTEXT://server:29092,PLAINTEXT_HOST://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
and I started the container with port 9092 exposed to the outside world.
I ran the Docker consumer with the bootstrap server set to server:29092.
I ran the Docker producer with the bootstrap server set to server:29092.
I ran the command line producer with the bootstrap server set to localhost:9092.
Messages typed in either of the producers appeared in the consumer.
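For completeness, starting the broker container described above might look something like this (a sketch; the image name and network name are placeholders I've invented, only the hostname "server" and the 9092 port mapping come from the steps above):
docker network create kafka-net
docker run -d --name server --network kafka-net -p 9092:9092 my-kafka-image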

Docker tutorial, localhost:4000 is inaccessible

Following the tutorial on https://docs.docker.com/get-started/part2/.
I start my docker container with docker run -p 4000:80 friendlyhello
and see
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:8088/ (Press CTRL+C to quit)
But it's inaccessible from the expected path of localhost:4000.
$ curl http://localhost:4000/
curl: (7) Failed to connect to localhost port 4000: Connection refused
$ curl http://127.0.0.1:4000/
curl: (7) Failed to connect to 127.0.0.1 port 4000: Connection refused
Okay, so maybe it's not on my local host. Getting the container ID, I retrieve the IP with
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 7e5bace5f69c
and it returns 172.17.0.2, but no luck! curl continues to give the same responses. I can confirm something is running on 4000...
lsof -i :4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 94812 travis 18u IPv6 0x7516cbae76f408b5 0t0 TCP *:terabase (LISTEN)
I'm pulling my hair out on this. I've read through the troubleshooting guide and can confirm:
* I'm not on a proxy
* I don't use a custom DNS
* I'm having issues connecting to Docker, not Docker connecting to my pip server.
Running the app with python app.py, the server starts and I'm able to hit it. What am I missing?
Did you accidentally put port=8088 at the bottom of your app.py file? When you run this, the last line of your output says your Python app is exposed on port 8088, not 80.
To confirm, you can either modify the app.py file and rebuild the image, or alternatively run docker run -p 4000:8088 friendlyhello, which would map your local port 4000 to 8088 in the container.
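If you'd rather keep the tutorial's 4000:80 mapping, the fix is the last lines of app.py instead (a sketch of what the tutorial's file should end with; port=8088 was presumably a stray edit):
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)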
Try to run it using:
docker run -p 4000:8088 friendlyhello
As you can see from the logs, your app starts on port 8088, but you mapped local port 4000 to container port 80, where nothing is actually listening.

failed: port is already allocated

I use Docker for running Oracle 11g Express on macOS Sierra 10.12.2
https://github.com/wnameless/docker-oracle-xe-11g
This is my error:
Last login: Sat Jan 7 22:42:11 on ttys000
➜ ~ docker run -d -p 49160:22 -p 49161:1521 wnameless/oracle-xe-11g
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
➜ ~ docker run -d -p 49160:22 -p 49161:1521 wnameless/oracle-xe-11g
043d8caecbb45d6e2e5999b69a2f760c20d53ff3aa2fad78cb1eb70acb058a1f
docker: Error response from daemon: driver failed programming external connectivity on endpoint serene_lalande (08bb0bd9684c0f92db7b736986bf894d3a57a714324405823496d13e175e7491): Error starting userland proxy: Bind for 0.0.0.0:49161 failed: port is already allocated.
➜ ~
My diagnostics:
➜ ~ netstat -anp tcp | grep 49161
tcp4 0 0 192.168.1.2.49161 17.188.166.13.5223 ESTABLISHED
➜ ~
➜ ~ docker --version
Docker version 1.12.5, build 7392c3b
My Diagnostic ID: 20EB9506-CC72-4093-8A15-60E05A841ED1
I don't know why. A few weeks ago this ran successfully. Recently, my machine was assigned a new DHCP IP. How can I run a Docker instance with Oracle 11g Express successfully?
You can't launch
docker run -d -p 49160:22
twice, as this means you want to allocate port 49160 on the host twice; of course, the second time you get your error message. For the second run, try
docker run -d -p 49161:22
You will need to use a different port instead of 49161. Try a port less than 49152.
You have a pre-existing connection between port 49161 on your computer and port 5223 on a remote Apple server. That port, therefore, cannot be used for anything else until that connection ceases to exist. Port 5223 is used for Apple's push notifications. As best as I can tell, your computer happened to use the random port 49161 to connect to Apple's server this time. Previously, when that Docker container worked, I would bet port 49161 on your computer was not in use.
Whenever you connect to a remote server, your own computer allocates a random port number for that connection. This time around, your computer allocated 49161 when it connected to Apple's push notifications service. Next time, it could be a completely different number. See https://en.wikipedia.org/wiki/Ephemeral_port
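In practice, that means re-running with host ports outside the ephemeral range, for example (a sketch; any free ports below 49152 will do, the choices here are mine):
docker run -d -p 2222:22 -p 1521:1521 wnameless/oracle-xe-11g
On macOS you can check the ephemeral range with sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last.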
