I would like to use docker with several kafka brokers.
So I started some tests with this docker image => https://github.com/ches/docker-kafka
With one broker, everything works fine:
I start my zookeeper:
docker run -d --name zookeeper jplock/zookeeper:3.4.6
Then I start a kafka instance:
docker run -d --name kafka --link zookeeper:zookeeper ches/kafka
When I create a topic and send messages, everything works.
Now I create a second kafka instance:
docker run -d --name kafka2 --link zookeeper:zookeeper --expose 9093 --env-file env ches/kafka
I changed the exposed port to 9093 and set these environment variables:
PORT=9093
EXPOSED_PORT=9093
BROKER_ID=2
The broker starts fine and I can create a topic with replication:
docker -D run --rm ches/kafka kafka-topics.sh --create --topic test2 --replication-factor 2 --partitions 1 --zookeeper $ZK_IP:2181
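Here $ZK_IP and $KAFKA_IP hold the container IPs, fetched for example with:
ZK_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' zookeeper)
KAFKA_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' kafka)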
Now when I want to send some messages:
docker run --rm --interactive ches/kafka kafka-console-producer.sh --topic test2 --broker-list $KAFKA_IP:9092
I get this error:
ERROR Producer connection to 172.17.0.17:9093 unsuccessful (kafka.producer.SyncProducer)
java.net.ConnectException: Connection refused
A docker ps gives me this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7bba0f3d0db ches/kafka:latest "kafka-console-produ About a minute ago Up About a minute 9092/tcp, 7203/tcp sick_shockley
9c475a659383 ches/kafka:latest "/start.sh" 4 minutes ago Up 4 minutes 7203/tcp, 9092/tcp, 9093/tcp kafka2
63aae4c539ab ches/kafka:latest "/start.sh" 28 minutes ago Up 28 minutes 7203/tcp, 9092/tcp kafka
ab560690e0e7 jplock/zookeeper:3.4.6 "/opt/zookeeper/bin/ 28 minutes ago Up 28 minutes 2181/tcp, 2888/tcp, 3888/tcp zookeeper
So kafka2 seems to be running on port 9093.
Why do I get this error?
Thanks.
Check in the zookeeper docker what the advertised host name of the kafka dockers is. It's very possible that they registered their docker hash as host name (as that's the result of getInetAddress() within a docker container) instead of a resolvable address.
If that's the case, editing your standard kafka config to change advertised.host.name should solve your problem (it's a bit annoying because you have to set it at startup, but you can, for example, fetch it from the docker container's /etc/hosts file at startup; it should be the first half of the first line in it).
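For example, a minimal start-script sketch (the config and script paths are assumptions; adapt them to your image):
# take the container's own IP from the first line of /etc/hosts
ADVERTISED_IP=$(head -n 1 /etc/hosts | awk '{print $1}')
# write it into the broker config before starting kafka
sed -i "s|^#\?advertised\.host\.name=.*|advertised.host.name=$ADVERTISED_IP|" /kafka/config/server.properties
exec /kafka/bin/kafka-server-start.sh /kafka/config/server.properties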
I'm running Jenkins and Keycloak in Docker containers; this is the docker ps output:
2a2daea22016 jboss/keycloak "/opt/jboss/tools/do…" 3 hours ago Up 3 hours 8080/tcp, 8443/tcp, 0.0.0.0:8090->8090/tcp, :::8090->8090/tcp keycloak
7184ee9a295 jenkins/jenkins "/sbin/tini -- /usr/…" 24 hours ago Up 3 hours 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 50000/tcp jenkins-master
I used these commands to run both Jenkins and Keycloak:
docker run --name keycloak -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 8090:8090 jboss/keycloak -Djboss.http.port=8090
docker run -p 8080:8080 --name=jenkins-master jenkins/jenkins
When I put
http://localhost:8090/auth/realms/jenkins/.well-known/openid-configuration
(taken from the Keycloak realm settings I created) into the OpenID configuration endpoint in Jenkins, it gives java.net.ConnectException: Connection refused (Connection refused).
It works fine when I run Jenkins and Keycloak on localhost without containers, but when I work with containers it gives this error.
I tried changing "Require SSL" to "none" instead of "external requests" in the Keycloak client settings, but it still doesn't work.
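One sketch that may help, assuming the default bridge networking shown in the docker ps output above: from inside the jenkins-master container, localhost is the container itself, not the host, so Jenkins cannot reach Keycloak at localhost:8090. Putting both containers on a shared user-defined network and addressing Keycloak by container name might work:
docker network create auth-net            # auth-net is a hypothetical name
docker network connect auth-net keycloak
docker network connect auth-net jenkins-master
# then point Jenkins at http://keycloak:8090/auth/realms/jenkins/.well-known/openid-configuration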
I have the following setup:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eab42051ca26 web-www:20180804 "node run.js" 8 minutes ago Up 8 minutes 3000/tcp web-www
63ec48e93a77 jwilder/nginx-proxy:latest "/app/docker-entrypo…" 9 hours ago Up 9 hours 0.0.0.0:80->80/tcp nginx-proxy-server
463ffd55260b fiorix/freegeoip "/go/bin/freegeoip" 9 hours ago Up 9 hours 8080/tcp freegeoip
bdc702c370ec euvat "/usr/local/bin/euva…" 9 hours ago Up 9 hours 3000/tcp euvat
40c07de732fa redis:4.0.10 "docker-entrypoint.s…" 9 hours ago Up 9 hours 6379/tcp redis-www
76831834f59d mongo:4.0 "docker-entrypoint.s…" 9 hours ago Up 9 hours 27017/tcp mongo-www
where my web-www Node.js app connects to redis and mongo via the network-www network:
NETWORK ID NAME DRIVER SCOPE
74d8f38aca38 bridge bridge local
1c894a7fa176 host host local
ca02c5ccac55 network-www bridge local
7226d9cc5360 none null local
My run.sh file looks like this:
# stop and remove any previous container matching $APP
OLDAPP="$(docker ps --all --quiet --filter=name="$APP")"
if [ -n "$OLDAPP" ]; then
  docker stop $OLDAPP && docker rm $OLDAPP
fi
docker run --name web-www \
--network network-www \
--link euvat:euvat \
--link freegeoip:freegeoip \
--env VIRTUAL_HOST=araweelo.local \
--env-file /env/web-www.env \
web-www:20180804.182446
So now I am starting a new development stack, dev-www for example: I will create network-dev and launch redis-dev and mongo-dev, but I want to share the euvat and freegeoip containers with the web-www container.
Is this the correct way to do this, or is there an alternative method?
Any advice is much appreciated.
Docker links are deprecated and may be removed soon.
It's better to create the networks and containers ahead of time and then join the containers to the network:
docker network create network-www
docker run --name web-www \
--env VIRTUAL_HOST=araweelo.local \
--env-file /env/web-www.env \
web-www:20180804.182446
docker network connect network-www web-www
docker network connect network-www euvat
docker network connect network-www freegeoip
The above commands create a user-defined bridge network network-www and connect the euvat, web-www and freegeoip containers to that network.
Replace or add containers as required. It might be a better idea to write a Compose file which brings up the containers with a single command, as sketched below.
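A rough docker-compose.yml sketch (image names, the env file path and VIRTUAL_HOST are taken from the commands above; adjust as needed):
version: "3"
services:
  web-www:
    image: web-www:20180804.182446
    environment:
      - VIRTUAL_HOST=araweelo.local
    env_file:
      - /env/web-www.env
    networks:
      - network-www
  euvat:
    image: euvat
    networks:
      - network-www
  freegeoip:
    image: fiorix/freegeoip
    networks:
      - network-www
networks:
  network-www: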
I start a docker container to run a Kafka server with:
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=192.168.99.100 --env ADVERTISED_PORT=9092 spotify/kafka
I find the IP address of the Docker container. This is 172.17.0.2 and I can ping this address.
Now I want a producer that sends messages:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='172.17.0.2:9092')
for i in range(100):
    producer.send('foobar', b'hola')
producer.close()
However this gives:
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.
How can I solve this?
I had the same error, but in my case it was because my topic name wasn't right/set, same as python_noob.
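Another thing worth checking, given the run command above: the broker advertises ADVERTISED_HOST=192.168.99.100, so after the initial bootstrap the client is redirected to that address. A sketch that connects through the advertised address instead of the container IP (assuming 192.168.99.100 is reachable from where the script runs):
from kafka import KafkaProducer

# bootstrap against the advertised address so the metadata the broker
# returns points back to the same, reachable endpoint
producer = KafkaProducer(bootstrap_servers='192.168.99.100:9092')
for i in range(100):
    producer.send('foobar', b'hola')
producer.flush()  # send() is asynchronous; flush before closing
producer.close()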
I am trying to make a container for consul and it keeps failing with this output; funnily enough, I don't really think it is an error:
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
The following is the command I am using:
docker container run --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -v "/var/lib/consul:/consul/data" -data-dir /var/lib/consul
It is a single-node fresh installation with the latest version from the registry, so there is no upgrade or version mismatch with any agent/client happening here.
Two things to fix. First, the -v volume argument must be passed to the docker command, not to the consul command. Move it to the right place:
docker container run -v "/consul/data:/var/lib/consul" --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -data-dir /var/lib/consul
Also invert the two sides of the mapping (the format is /host/dir:/container/dir).
Second, by default Consul can't listen on privileged ports (i.e. 53). See https://www.consul.io/docs/guides/forwarding.html, so remove the -dns-port 53 and implement one of the approaches they recommend:
docker container run -v "/consul/data:/var/lib/consul" --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -bootstrap-expect 1 -ui -datacenter dc1 -data-dir /var/lib/consul
I recommend the DNSMasq setup; it is easy to implement.
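For example, a minimal dnsmasq forwarding sketch (8600 is Consul's default DNS port; the drop-in path assumes a standard dnsmasq install):
# forward *.consul lookups to the local Consul agent's DNS port
echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul
sudo systemctl restart dnsmasq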
@Robert Alright, I think we also went a bit off topic here. The real issue is the message it shows, and that it exits immediately after that.
I tried your example and it gives the same message/error (I don't think it is an error though):
[root@ip-X-X-X-X user]# docker container run --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -v "/var/lib/consul:/consul/data" -data-dir /var/lib/consul
==> Found address 'X.X.X.X' for interface 'eth0', setting bind option...
Consul v0.8.5
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@ip-X-X-X-X user]# docker container ls | grep consul-server
[root@ip-10-201-14-34 user]#
Same for the recursors example:
[root@ip-X.X.X.X user]# docker container run --net host --name consul-server -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e CONSUL_BIND_INTERFACE='eth0' consul agent -server -client 0.0.0.0 -dns-port 53 -bootstrap-expect 1 -ui -datacenter dc1 -v "/var/lib/consul:/consul/data" -data-dir /var/lib/consul -recursers 8.8.8.8
==> Found address 'X.X.X.X' for interface 'eth0', setting bind option...
Consul v0.8.5
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@ip-X-X-X-X user]# docker container ls | grep consul-server
[root@ip-10-201-14-34 user]#
Setting Up
I am using the confluent/kafka images from Docker Hub to start the zookeeper and kafka instances in two separate containers. The commands I used to start the containers are as follows:
docker run --rm --name zookeeper -p 2181:2181 confluent/zookeeper
docker run --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka
And I have two containers zookeeper and kafka running now.
Note that I have mapped ports 2181 and 9092 of the containers to my host machine's ports. I verified that this mapping works by opening localhost:2181 and localhost:9092 in my browser, which prints some errors in my running containers' terminals.
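A quicker check without a browser, for example:
nc -vz localhost 2181
nc -vz localhost 9092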
Then I created a topic by issuing the following command on my host machine:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
This was successful, and I verified it by listing the topics with the following command:
./bin/kafka-topics.sh --list --zookeeper localhost:2181
Now the ISSUE:
I am trying to produce some messages to the broker with the following command:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
I am getting the following exception:
[2017-03-02 20:36:02,376] WARN Failed to send producer request with correlation id 2 to broker 0 with data for partitions [test,0] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:594)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
I read some threads on the internet that suggested I update my hosts file. If so, what entry do I have to put in my hosts file?
Also, some threads suggested I set the ADVERTISED_HOST entry to the correct IP in the configuration file. Which configuration file? Where do I make the update?
If it's the server.properties file used by the kafka broker, then I did try going into the container created by the confluent/kafka image. It looks like this:
socket.send.buffer.bytes=102400
delete.topic.enable=true
socket.request.max.bytes=104857600
log.cleaner.enable=true
log.retention.check.interval.ms=300000
log.retention.hours=168
num.io.threads=8
broker.id=0
log4j.opts=-Dlog4j.configuration\=file\:/etc/kafka/log4j.properties
log.dirs=/var/lib/kafka
auto.create.topics.enable=true
num.network.threads=3
socket.receive.buffer.bytes=102400
log.segment.bytes=1073741824
num.recovery.threads.per.data.dir=1
num.partitions=1
zookeeper.connection.timeout.ms=6000
zookeeper.connect=zookeeper\:2181
Any suggestions on how I can overcome this and make producing to and consuming from the kafka containers possible from my host machine?
Thanks a lot!
I was able to figure it out within seconds of posting this question.
I had to get the HOSTNAME of the container in which the broker was running by issuing, inside the container:
echo $HOSTNAME
Then I updated the /etc/hosts file on my host machine with loopback entries:
127.0.0.1 KAFKA_CONTAINER_HOSTNAME
127.0.0.1 ZOOKEEPER_CONTAINER_HOSTNAME
I had to do the same with the zookeeper container in order for the consumer to also work without an issue.
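For example, a small sketch that automates this from the host (container names taken from the run commands above):
# grab each container's hostname and map it to loopback on the host
KAFKA_HOST=$(docker exec kafka hostname)
ZK_HOST=$(docker exec zookeeper hostname)
echo "127.0.0.1 $KAFKA_HOST" | sudo tee -a /etc/hosts
echo "127.0.0.1 $ZK_HOST" | sudo tee -a /etc/hosts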
Cheers!!!