I'm pretty new to Docker and am trying to run a Kafka Docker image that uses the PLAINTEXT security protocol. I know this configuration exists and works, because I can get my container running with docker-compose up in a directory containing a compose file with the environment variables defined. However, I am having a hard time getting the image running via the command line.
I am running this command in terminal:
docker run -e KAFKA_ZOOKEEPER_CONNECT='zookeeper:2181' -e KAFKA_LISTENERS='PLAINTEXT://:81543,PLAINTEXT_HOST://:33333' --name kafka sha256:c3b05sdaw30e711c09b925e52991cc0a9c0c163016fhd47ae39840f255f490b2
The Kafka environment variables in my compose file are:
environment:
  KAFKA_LISTENERS: PLAINTEXT://:81543,PLAINTEXT_HOST://:33333
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:81543,PLAINTEXT_HOST://host.docker.internal:33333
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
The stack trace is:
java.lang.IllegalArgumentException: Error creating broker listeners from ''PLAINTEXT://:81543,PLAINTEXT_HOST://:33333'': No security protocol defined for listener 'PLAINTEXT
at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:274)
at kafka.server.KafkaConfig.$anonfun$listeners$1(KafkaConfig.scala:1680)
at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:1679)
at kafka.server.KafkaConfig.advertisedListeners(KafkaConfig.scala:1707)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1778)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1756)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1312)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:34)
at kafka.Kafka$.main(Kafka.scala:68)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.IllegalArgumentException: No security protocol defined for listener 'PLAINTEXT
at kafka.cluster.EndPoint$.$anonfun$createEndPoint$2(EndPoint.scala:48)
at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
at kafka.cluster.EndPoint$.securityProtocol$1(EndPoint.scala:48)
at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:53)
at kafka.utils.CoreUtils$.$anonfun$listenerListToEndPoints$6(CoreUtils.scala:271)
at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:99)
at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:86)
at scala.collection.mutable.ArraySeq.map(ArraySeq.scala:38)
at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:271)
... 9 more
It turns out I had to define all of my config variables in the command, as described here:
https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
docker run --network=rmoff_kafka --rm --detach --name broker \
-p 9092:9092 \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:5.5.0
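Applied to the setup in the question, the same idea looks roughly like this (a sketch: <image> and <network> are placeholders, and the listener values are copied verbatim from the compose environment above; the quotes around the values are dropped here, since the stack trace above suggests they were being included in the value itself):

docker run --detach --name kafka \
  --network=<network> \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://:81543,PLAINTEXT_HOST://:33333 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:81543,PLAINTEXT_HOST://host.docker.internal:33333 \
  <image>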
Related
I just started to explore MinIO and I'm trying to run the following command:
docker run \
-p 9000:9000 \
-p 9001:9001 \
-e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
quay.io/minio/minio server /data --console-address ":9001"
but it gives me the following output:
docker: Error response from daemon: failed to initialize logging driver: error creating logger: error creating loki logger: loki: option loki-url is invalid parse "https://<user_id>:<password>@logs-us-west1.grafana.net/loki/api/v1/push": net/url: invalid userinfo.
How to solve this issue?
The problem is that the default logging driver for my system is Loki.
You can either change Docker's daemon.json file (located in /etc/docker on Linux) and add "log-driver": "local", or state explicitly that you want to use the local driver when you run the minio image.
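For the second option, here is a sketch of the MinIO command from the question with the logging driver overridden explicitly for that one container (--log-driver local is a standard docker run flag; everything else is unchanged):

docker run \
  --log-driver local \
  -p 9000:9000 \
  -p 9001:9001 \
  -e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
  -e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
  quay.io/minio/minio server /data --console-address ":9001"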
For a docker-compose file, you can add
logging:
  driver: local
to the minio service.
Description
I run Windows 10 and Docker Desktop. I execute all commands in a DOS prompt or inside the Docker container, and I use the Docker Desktop app to start/stop/remove images.
Problem
When I try to send a random message to kafka I get an error message:
ERROR Error when sending message to topic test with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1503 ms has passed since batch creation plus linger time
How the Kafka container is created:
docker run -d --name kafka -p 9092:9092 -p 7203:7203 \
  -e KAFKA_ZOOKEEPER_CONNECT=172.17.0.2:2181 \
  -e kafka_advertised_host_name=localhost \
  -e kafka_advertised_port=9092 \
  --env kafka_broker_id=1 \
  --env kafka_auto_create_topics_enable=true \
  --env KAFKA_LISTENERS=PLAINTEXT://:9092 \
  --env kafka_advertised_listeners=PLAINTEXT://localhost:9092 \
  --env request_timeout_ms=20000 \
  artifactory.x.net/image
How a message is sent to kafka with a producer:
kafka-console-producer.sh --broker-list localhost:9092 --topic test
Question
What kind of fault can it be, and how can I successfully send a message to Kafka? What is wrong?
I start a docker container to run a Kafka server with
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=192.168.99.100 --env ADVERTISED_PORT=9092 spotify/kafka
I find the IP address of the Docker container. This is 172.17.0.2 and I can ping this address.
Now I want a producer that sends messages:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='172.17.0.2:9092')
for i in range(100):
    producer.send('foobar', b'hola')
producer.close()
However this gives:
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.
How to solve this?
I had the same error, but in my case it was because my topic name wasn't right/set, same as python_noob.
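If you want to rule that out, one quick check (a sketch; it assumes the Kafka command-line tools are available on the host, and that ZooKeeper is reachable on the mapped port 2181 as in the docker run command above) is to list the topics the broker actually knows about and confirm that 'foobar' is there:

./bin/kafka-topics.sh --list --zookeeper localhost:2181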
I am trying to run a container for Consul, and it keeps failing with this output. Funny enough, I don't really think it is an error:
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
The following is the command I am using:
docker container run --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -v "/var/lib/consul:/consul/data" -data-dir /var/lib/consul
It is a single-node fresh installation with the latest version from the registry, so there is no upgrade or version mismatch with any agent/client happening here.
Two things to fix. First, the -v volume argument must be passed to the docker command, not to the consul command. Move it to the right place:
docker container run -v "/consul/data:/var/lib/consul" --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -data-dir /var/lib/consul
Also note that the paths are inverted relative to your command; the format is /host/dir:/container/dir.
Second, by default Consul can't listen on privileged ports (such as 53). See https://www.consul.io/docs/guides/forwarding.html, so remove -dns-port 53 and implement one of the approaches it recommends:
docker container run -v "/consul/data:/var/lib/consul" --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -bootstrap-expect 1 -ui -datacenter dc1 -data-dir /var/lib/consul
I recommend the DNSMasq setup; it is easy to implement.
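For illustration, a minimal dnsmasq-based sketch (assuming dnsmasq reads config snippets from /etc/dnsmasq.d and is managed by systemd) forwards only *.consul lookups to Consul's default DNS port 8600, so Consul itself never needs to bind port 53:

# tell dnsmasq to forward the .consul domain to Consul's DNS interface on port 8600
echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul
sudo systemctl restart dnsmasq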
@Robert Alright, I think we also went a bit off topic here. The real issue is the message it shows, and that it exits immediately after that.
I tried your example and it gives the same message/error (I don't think it is an error though):
[root@ip-X-X-X-X user]# docker container run --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -v "/var/lib/consul:/consul/data" -data-dir /var/lib/consul
==> Found address 'X.X.X.X' for interface 'eth0', setting bind option...
Consul v0.8.5
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@ip-X-X-X-X user]# docker container ls | grep consul-server
[root@ip-10-201-14-34 user]#
Same for recursors example:
[root@ip-X.X.X.X user]# docker container run --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -v "/var/lib/consul:/consul/data" -data-dir /var/lib/consul -recursers 8.8.8.8
==> Found address 'X.X.X.X' for interface 'eth0', setting bind option...
Consul v0.8.5
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@ip-X-X-X-X user]# docker container ls | grep consul-server
[root@ip-10-201-14-34 user]#
Setting Up
I am using the confluent/zookeeper and confluent/kafka images from Docker Hub to start the ZooKeeper and Kafka instances in two separate containers. The commands I have used to start the containers are as follows:
docker run --rm --name zookeeper -p 2181:2181 confluent/zookeeper
docker run --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka
And I have two containers zookeeper and kafka running now.
Note that I have mapped ports 2181 and 9092 of the containers to my host machine's ports. I verified that this mapping works by opening localhost:2181 and localhost:9092 in my browser, which causes some errors to be printed in the running containers' terminals.
Then I created a topic by issuing the following command on my host machine:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
This is successful and I verified it by listing the topics with the following command:
./bin/kafka-topics.sh --list --zookeeper localhost:2181
Now the ISSUE:
I am trying to produce some messages to the broker with the following command:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
I am getting the following exception:
[2017-03-02 20:36:02,376] WARN Failed to send producer request with correlation id 2 to broker 0 with data for partitions [test,0] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:594)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
I read some threads on the internet that suggested I update my hosts file. If so, what entry do I have to put in my hosts file?
Some threads also suggested I set the ADVERTISED_HOST entry to the correct IP in the configuration file. Which configuration file? Where do I make the update?
If it's the server.properties file used by the Kafka broker, I did go into the container created by the confluent/kafka image, and it looks like this:
socket.send.buffer.bytes=102400
delete.topic.enable=true
socket.request.max.bytes=104857600
log.cleaner.enable=true
log.retention.check.interval.ms=300000
log.retention.hours=168
num.io.threads=8
broker.id=0
log4j.opts=-Dlog4j.configuration\=file\:/etc/kafka/log4j.properties
log.dirs=/var/lib/kafka
auto.create.topics.enable=true
num.network.threads=3
socket.receive.buffer.bytes=102400
log.segment.bytes=1073741824
num.recovery.threads.per.data.dir=1
num.partitions=1
zookeeper.connection.timeout.ms=6000
zookeeper.connect=zookeeper\:2181
Any suggestions on how I can overcome this and make producing to and consuming from the Kafka containers possible from my host machine?
Thanks a lot!!!
I was able to figure it out within seconds of posting this question.
I had to get the HOSTNAME of the container in which the broker was running by issuing:
echo $HOSTNAME
And I updated the /etc/hosts file on my host machine with the loopback entries:
127.0.0.1 KAFKA_CONTAINER_HOSTNAME
127.0.0.1 ZOOKEEPER_CONTAINER_HOSTNAME
I had to do the same with the zookeeper container in order for the consumer to also work without an issue.
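Put together, the lookup can be done roughly like this (a sketch; it assumes the containers are named kafka and zookeeper as in the docker run commands above, and the <..._hostname> placeholders are whatever those inspect commands print):

# print each container's hostname (the name the broker and ZooKeeper advertise)
docker inspect --format '{{ .Config.Hostname }}' kafka
docker inspect --format '{{ .Config.Hostname }}' zookeeper
# then append loopback entries for those hostnames to /etc/hosts on the host
echo "127.0.0.1 <kafka_container_hostname>" | sudo tee -a /etc/hosts
echo "127.0.0.1 <zookeeper_container_hostname>" | sudo tee -a /etc/hosts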
Cheers!!!