Unable to connect to Kafka from outside application - docker

I have two docker machines and I want to create a Kafka cluster inside a Docker swarm. My docker-compose.yml looks like this:
version: '3.2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
I followed this question: Unable to connect to Kafka run in container from Spring Boot app run outside container, and I am trying to access Kafka from outside using localhost:29092.
I have already created the topic mytesttopic inside Kafka. The Python code below:
from kafka import KafkaConsumer

def consume_from_topic():
    try:
        consumer = KafkaConsumer('mytesttopic',
                                 group_id=None,
                                 bootstrap_servers=['localhost:29092'],
                                 auto_offset_reset='earliest')
        for message in consumer:
            # consumer.commit()
            print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition,
                                                 message.offset, message.key,
                                                 message.value))
    except Exception as e:
        print(e)

if __name__ == '__main__':
    consume_from_topic()
returns:
NoBrokersAvailable
Does anyone know what I am missing here?

Given that you are running a docker swarm on two other machines, you won't be able to connect on localhost:29092: Kafka is exposed on port 29092 on the nodes of the swarm, not on your local machine. Try connecting to Kafka using the hostname of one of your nodes plus port 29092. You should be able to reach Kafka that way.
Please note that this only works if you are running docker swarm with the routing mesh. The routing mesh makes every node accept incoming requests on a published port for any service, whether or not the container runs on that host, and routes the traffic to the host where your container is actually running.
If you have not yet set up the routing mesh, try connecting to the actual hostname where a Kafka container is running (not recommended, but it works for testing purposes).
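A minimal sketch of that suggestion, assuming one of your swarm nodes is reachable from your machine under the hostname manager:
from kafka import KafkaConsumer

# connect via a swarm node's hostname on the published port,
# instead of localhost
consumer = KafkaConsumer('mytesttopic',
                         bootstrap_servers=['manager:29092'],
                         auto_offset_reset='earliest')
for message in consumer:
    print(message.value)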
I hope this helps you!

Both of your listeners are bound the same way.
You need to set PLAINTEXT_HOST://0.0.0.0:29092 to explicitly bind that listener to all interfaces.
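In the compose file above, that would mean changing only the KAFKA_LISTENERS line (a sketch; the advertised listeners stay as they are):
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092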

Related

accessing kafka running in docker-compose from other machines

I want to run Kafka as a single-node, single-broker setup on one of the computers on our network and be able to access it from other machines. For example, running docker-compose on 192.168.0.36, I want to access Kafka from 192.168.0.19.
Since we can't use any Linux distribution, I have to run Kafka as a Docker container on Windows.
I know there are already a ton of questions and documents on this topic, including this question and this example and also this blog post, but unfortunately none of them worked out for me.
This is the compose file I'm using right now:
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
    expose:
      - "2181"
    volumes:
      - type: bind
        source: "G:\\path\\to\\zookeeper"
        target: /opt/zookeeper-3.4.6/data
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    expose:
      - "9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9093, OUTSIDE://192.168.0.36:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:9093,OUTSIDE://:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_DIRS: "/kafka"
    volumes:
      - type: bind
        source: "G:\\path\\to\\logs"
        target: /kafka/
Things I tried while debugging the issue:

- I already tried all the different configurations from the mentioned questions and blog posts.
- I can access Kafka from 192.168.0.36, which is the machine running docker-compose, but not from 192.168.0.19 (NoBrokersAvailable error in kafka-python).
- Just to see whether it's an internal networking problem or not, I tried a similar docker-compose file running a Falcon API using gunicorn, and I can call that API from 192.168.0.19.
- I also used the Windows telnet tool to check whether port 9092 is accessible from different machines; it's accessible from 0.36 but not from 0.19.
- I tried using a custom network like this one.

I'm testing the connection using Python's kafka-python package. We have a multi-broker Kafka running on our on-premise Kubernetes cluster and it's working fine, so I don't think my testing scripts have any issues.
UPDATE
As OneCricketeer suggested, I tried this solution with different configurations like 0.0.0.0:9092=>127.0.0.1:9092 and 192.168.0.36:9092=>127.0.0.1:9092. I also disabled the firewall. I'm still getting NoBrokersAvailable, but at least I can now reach 0.36:9092 from the other machine's telnet.
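One way to separate basic reachability from Kafka-level problems is a bare TCP check outside of kafka-python; if this connects but KafkaConsumer still reports NoBrokersAvailable, the problem lies past the network, e.g. in the advertised listener configuration. A sketch, with the IP taken from the question:
import socket

# plain TCP connect to the published broker port
sock = socket.create_connection(("192.168.0.36", 9092), timeout=5)
print("port is reachable")
sock.close()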

Docker Setup - Networking between multiple containers

On my Linux server, I am running 3 images:
A) Kafka and Zookeeper with this docker-compose file:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
This opens up the Kafka broker to the host machine.
B) JupyterHub
docker run -v /notebooks:/notebooks -p 8000:8000 jupyterhub
C) Confluent Schema Registry (I have not tried it yet, but in my final setup I will have a schema registry container as well)
docker run confluentinc/cp-schema-registry
All of them start up without any issues. But how do I open up the JupyterHub container to the Kafka container and Schema Registry ports so that my Python scripts can access the brokers?
I'm assuming you want to run your Jupyter notebook container on demand, whereas your Zookeeper and Kafka containers will always be running separately? You can create a Docker network and join all the containers to it. Then your containers will be able to resolve each other by name.

- Create a network.
- Specify this network in the compose file.
- When starting your other containers with docker run, use the --network option (sketched below).
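For the manual-network route, the commands could look like this (kafka-net is a placeholder name):
docker network create kafka-net
docker run --network kafka-net -v /notebooks:/notebooks -p 8000:8000 jupyterhub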
If you run docker network ls then you can find the name of the network that Compose creates for you; it will be named something like directoryname_default. You can then launch the containers connected to that network:
docker run --net directoryname_default confluentinc/cp-schema-registry
If you can include these services in the same docker-compose.yml file, then you won't need to do anything special. In particular, this probably makes sense for the Confluent Schema Registry, which you can consider a core part of the Kafka stack if you're using Avro messages.
You can use the Docker Compose service name kafka as a host name here, but since you need to connect to the “inside” listener you’ll need to configure a non-default port 9093. (The Docker Compose expose: directive doesn’t do much and you can safely delete it.)
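For example, a consumer run from the JupyterHub container attached to that network might look like this (a sketch; the topic name my-topic is made up, and kafka:9093 is the INSIDE listener from the compose file above):
from kafka import KafkaConsumer

# "kafka" resolves through Docker's internal DNS on the shared network;
# port 9093 is the INSIDE listener, which advertises kafka:9093
consumer = KafkaConsumer('my-topic', bootstrap_servers=['kafka:9093'])
for message in consumer:
    print(message.value)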

Cannot connect to kafka docker container from outside client (wurstmeister images)

There are so many answers to this question that I ended up totally confused about how to connect to a Kafka docker container from an outside client.
I have created two docker machines, a manager and a worker with these commands:
docker-machine create manager
docker-machine create worker1
I have added these two nodes to a docker swarm.
docker#manager:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
6bmovp3hr0j2w5irmexvvjgzq * manager Ready Active Leader 19.03.5
mtgbd9bg8d6q0lk9ycw10bxos worker1 Ready Active 19.03.5
docker-compose.yml:
version: '3.2'
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka:latest
ports:
- target: 9094
published: 9094
protocol: tcp
mode: host
environment:
HOSTNAME_COMMAND: "hostname | awk -F'-' '{print $$2}'"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
volumes:
- /var/run/docker.sock:/var/run/docker.sock
From inside Docker, everything works fine. I can create topics and then produce/consume messages.
I created a Python script in order to consume messages from outside Docker. The simple code is presented below:
from kafka import KafkaConsumer
import json

try:
    print('Welcome to parse engine')
    consumer = KafkaConsumer('streams-plaintext-input', bootstrap_servers='manager:9094')
    for message in consumer:
        print(message)
except Exception as e:
    # Log the error appropriately.
    print(e)
But the code is stuck forever. The connection is never established. Can anyone provide any help on how to set up the connection?
Since you are using docker-machine, you have to do one of the following:

- Run your code also in a container (using kafka:9092).
- Run your code within the VM OS (using vm-host-name:9094).
- Add PLAINTEXT://localhost:9096 to the advertised listeners, expose 9096 from the VM to your host, then use localhost:9096 in your code (note: 9096 is some random port).

The gist is that clients must be able to connect both to the bootstrap address and to the advertised address that the broker returns. If they cannot connect to the second, the code will time out.
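To tell the two failure modes apart in kafka-python terms, a small sketch (hostnames taken from the options above):
from kafka import KafkaConsumer
from kafka.errors import NoBrokersAvailable

# If the bootstrap address is unreachable, the constructor raises
# NoBrokersAvailable almost immediately; if only the advertised address
# is wrong, the consumer constructs fine but never receives records.
try:
    consumer = KafkaConsumer('streams-plaintext-input',
                             bootstrap_servers='manager:9094',
                             consumer_timeout_ms=10000)  # stop iterating after 10s idle
    for message in consumer:
        print(message)
except NoBrokersAvailable:
    print('bootstrap address unreachable: check host name and published port')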

Error Docker compose configure listener for kafka Service. No Broker available

I am trying to set up docker-compose with Kafka from wurstmeister.
SCENARIO:
I am developing an architecture of multiple microservices. Concretely: I have a Spring Boot app that sends JSON to my Kafka broker. A Flask service consumes the data.
This works when running the whole thing outside Docker. I am also able to send data to the Kafka topic in Docker.
CODE:
Flask:
KafkaHost = "kafka:9092"
def initkafka():
# connect to Kafka server and pass the topic we want to consume
consumer = KafkaConsumer("TEST",
group_id='view',
bootstrap_servers=[Constants.KafkaHost]
)
KafkaConsumer(auto_offset_reset='latest',
enable_auto_commit=False)
KafkaConsumer(value_deserializer=lambda m: json.loads(m.dedoce('utf-8')))
KafkaConsumer(consumer_timeout_ms=1000)
return consumer
Docker Compose:
zookeeper:
  image: wurstmeister/zookeeper
  ports:
    - "2181:2181"
  networks:
    - test-net
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    #KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
    KAFKA_LISTENERS: PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.17.0.1:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_CREATE_TOPICS: "TEST:1:1"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - zookeeper
  networks:
    - test-net
ERROR
Traceback (most recent call last):
  File "run.py", line 1, in <module>
    from controller import Controller
  File "/app/controller/Controller.py", line 27, in <module>
    consumer = KafkaConfig.initkafka()
  File "/app/config/KafkaConfig.py", line 16, in initkafka
    enable_auto_commit=False)
  File "/usr/local/lib/python3.6/site-packages/kafka/consumer/group.py", line 324, in __init__
    self._client = KafkaClient(metrics=self._metrics, **self.config)
  File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 221, in __init__
    self.config['api_version'] = self.check_version(timeout=check_timeout)
  File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 826, in check_version
    raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
I think it is a problem with the environment configuration. I've read the wurstmeister docs but I can't figure out what I need to set up so that my Flask service finds the Kafka broker.
The logs say that Kafka is up and running and that the topic "TEST" is created.
Do I have to configure the listeners, e.g. specify which IP and port in my network will listen to Kafka? Because in the Kafka docs advertised.listeners is described as:
Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners it is not valid to advertise the 0.0.0.0 meta-address.
Unless I'm mistaken, KAFKA_ADVERTISED_LISTENERS needs to advertise the same Kafka host that you define in your Flask client. Thus, if you are connecting to Kafka from inside a Docker container, you should have KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092. If connecting from the host, it should be KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092.
Alternatively, you may omit the KAFKA_ADVERTISED_LISTENERS setting and define instead KAFKA_ADVERTISED_HOST_NAME: kafka.
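Applied to the compose file from the question, the first suggestion would look something like this (a sketch, not tested; only the environment section shown):
environment:
  KAFKA_LISTENERS: PLAINTEXT://:9092
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  KAFKA_CREATE_TOPICS: "TEST:1:1"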
So I've gone through the explanation mentioned by @cricket_007. It's a bit clearer now, but I still struggle with getting the connection.
As a recap of my scenario: I run all my services and the message broker in the same Docker network, so there is no external connection to be made.
In this blog entry there is an example given:
KAFKA_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092
KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_BOB
I guess I know what this configuration means.
In my case I thought I had to change it this way:
KAFKA_LISTENERS: LISTENER_PY://kafka:9092
KAFKA_ADVERTISED_LISTENERS: LISTENER_PY://kafka:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_PY:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_PY
I guess KAFKA_INTER_BROKER_LISTENER_NAME is not needed because I only have one broker. But does the listener name (LISTENER_PY) depend on my Flask service name or any other property? As far as I understand, I can use "kafka" as the host because I run Kafka as a service named "kafka" in my docker-compose. I tried this configuration but it still doesn't work. I also wonder how my Spring service manages to connect as a producer from within, without defining any listener configuration.

Kafka Spark Streaming : Broker may not be available [Docker]

I'm new to Docker. I'm trying to run a Spark Streaming application using Docker.
I have Kafka and the Spark Streaming application running separately in 2 containers.
My Kafka service is up and running fine. I tested with $KAFKA_HOME/bin/kafka-console-producer.sh and $KAFKA_HOME/bin/kafka-console-consumer.sh, and I'm able to receive messages.
But when I run my Spark Streaming application, it shows:
[Consumer clientId=consumer-1, groupId=consumer-spark] Connection to node -1 could not be established. Broker may not be available.
So, I'm not able to consume messages.
Kafka docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENERS: PLAINTEXT://:9092
    depends_on:
      - zookeeper
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Spark Streaming code:
val sparkConf = new SparkConf().setAppName("Twitter Ingest Data")
sparkConf.setIfMissing("spark.master", "local[2]")

val ssc = new StreamingContext(sparkConf, Seconds(2))

val kafkaTopics = "sentiment"
val kafkaBroker = "kafka:9092"

val topics: Set[String] = kafkaTopics.split(",").map(_.trim).toSet
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> kafkaBroker,
  "group.id" -> "consumer-spark",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer]
)

logger.info("Connecting to broker...")
logger.info(s"kafkaParams: $kafkaParams")

val tweetStream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams))
I'm not sure if I'm missing anything.
Any help would be highly appreciated!!
If you're new to Docker, I wouldn't recommend having Kafka or Spark be the first things you try it with. Besides, it seems like you just copied the wurstmeister example without reading the README about configuring it... (which I can tell because you don't need the build: . property; that container already exists on DockerHub).
Basically, Kafka is only available within your Docker network via this configuration
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
You will need to edit this to have the port forwarding work properly from outside of Docker Compose's default network, or you must run your Spark code within a container as well.
If the Spark code is not in a container, then pointing it at kafka:9092 won't work at all
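One common fix is the dual-listener pattern used elsewhere in this thread (a sketch; the outside port 29092 is illustrative and would also need a "29092:29092" entry under ports):
KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:29092
KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:29092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
With that, Spark inside the Compose network keeps using kafka:9092, while a process on the host machine would use localhost:29092.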
Ref. Kafka listeners explained
And lots of previous questions with similar problems (the issue is not just Spark related)
