I have a local application with Kafka and ZooKeeper, but when I run docker compose up on my Arch Linux desktop, Kafka shows this error:
container log:
kafka_1 | java.net.NoRouteToHostException: No route to host
kafka_1 | at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
kafka_1 | at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.18.0.2:2181.
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will not attempt to authenticate using SASL (unknown error)
kafka_1 | [main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for sever zookeeper/172.18.0.2:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
docker compose file:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    restart: unless-stopped
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - '2181:2181'
    networks:
      - domper-network
  kafka:
    image: confluentinc/cp-kafka:latest
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
      - '9094:9094'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://:9092,OUTSIDE://:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://host.docker.internal:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
    extra_hosts:
      - 'host.docker.internal:172.17.0.1' # the Docker gateway
    networks:
      - domper-network
  postgres:
    image: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: Admin#2021!
      POSTGRES_DB: domper
      POSTGRES_HOST_AUTH_METHOD: password
    ports:
      - 5432:5432
    volumes:
      - postgres-domper-data:/var/lib/postgresql/data
    networks:
      - domper-network
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    environment:
      PGADMIN_DEFAULT_EMAIL: 'admin#admin.com.br'
      PGADMIN_DEFAULT_PASSWORD: 'Admin#2021!'
    ports:
      - 16543:80
    depends_on:
      - postgres
    networks:
      - domper-network
  api:
    build: ./
    restart: 'no'
    command: bash -c "npm i && npm run migration:run && npm run seed:run && npm run start:dev"
    ports:
      - 8888:8888
    env_file:
      - dev.env
    volumes:
      - ./:/var/www/api
      - /var/www/api/node_modules/
    depends_on:
      - postgres
      - kafka
    networks:
      - domper-network
    # healthcheck:
    #   test: ["CMD", "curl", "-f", "http://localhost:8888/healthcheck"]
    #   interval: 60s
    #   timeout: 5s
    #   retries: 5
  notification-service:
    build: ../repoDomperNotification/
    restart: 'no'
    command: npm run start
    ports:
      - 8889:8889
    env_file:
      - dev.env
    volumes:
      - ../repoDomperNotification/:/var/www/notification
      - /var/www/notification/node_modules/
    depends_on:
      - kafka
    networks:
      - domper-network
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:8889/healthcheck']
      interval: 60s
      timeout: 5s
      retries: 5
volumes:
  postgres-domper-data:
    driver: local
networks:
  domper-network:
On my Windows desktop the same setup works, and I don't know what this error means; I think it's some host configuration.
I've tried everything I found on the internet and none of it worked.
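Since the same Compose file works on Docker Desktop for Windows but not on a Linux host, one thing worth ruling out is the host firewall rejecting traffic on the Docker bridge network; "No route to host" inside a user-defined network is often an iptables/firewalld REJECT rather than a Kafka misconfiguration. A minimal check, assuming the service names from the Compose file above and that bash and timeout are available in the cp-kafka image:

docker compose exec kafka bash -c 'timeout 2 bash -c "</dev/tcp/zookeeper/2181" && echo reachable || echo unreachable'
# If this prints "unreachable", look for REJECT rules on the host, e.g.:
sudo iptables -L FORWARD -n --line-numbers | grep -i reject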
Log after @OneCricketeer's suggestion (removing host.docker.internal and the extra_hosts):
repodompercore-kafka-1 | ===> Running preflight checks ...
repodompercore-kafka-1 | ===> Check if /var/lib/kafka/data is writable ...
repodompercore-kafka-1 | ===> Check if Zookeeper is healthy ...
repodompercore-kafka-1 | SLF4J: Class path contains multiple SLF4J bindings.
repodompercore-kafka-1 | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
repodompercore-kafka-1 | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
repodompercore-kafka-1 | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
repodompercore-kafka-1 | SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
repodompercore-kafka-1 | log4j:WARN No appenders could be found for logger (io.confluent.admin.utils.cli.ZookeeperReadyCommand).
repodompercore-kafka-1 | log4j:WARN Please initialize the log4j system properly.
repodompercore-kafka-1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
repodompercore-kafka-1 exited with code 1
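Since it is the "Check if Zookeeper is healthy" preflight step that exits with code 1, it may be worth confirming ZooKeeper is reachable at all before tuning Kafka settings. Two hedged checks from the host, assuming the 2181:2181 port mapping above and that nc is installed (srvr is in ZooKeeper's default four-letter-word whitelist):

docker compose logs zookeeper   # look for "binding to port 0.0.0.0/0.0.0.0:2181"
echo srvr | nc localhost 2181   # should print server stats if ZooKeeper is up and reachable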
Related question:
Here's my docker-compose:
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      TZ: "America/Toronto"
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager
    container_name: cmak
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=zookeper:2181
      - APPLICATION_SECRET=letmein
    command: bin/cmak -Dconfig.file=/opt/cmak/conf/application.conf -Dhttp.port=9080
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
My port 9000 is already used by Portainer and works properly, but when I try to run Kafka Manager on 9080, I get this error without any further explanation:
nodered | 14 Sep 21:59:41 - [info] Starting flows
nodered | 14 Sep 21:59:41 - [info] Started flows
cmak | Oops, cannot start the server.
cmak | java.lang.RuntimeException: No application loader is configured. Please configure an application loader either using the play.application.loader configuration property, or by depending on a module that configures one. You can add the Guice support module by adding "libraryDependencies += guice" to your build.sbt.
cmak | at scala.sys.package$.error(package.scala:30)
cmak | at play.api.ApplicationLoader$.play$api$ApplicationLoader$$loaderNotFound(ApplicationLoader.scala:44)
cmak | at play.api.ApplicationLoader$.apply(ApplicationLoader.scala:70)
cmak | at play.core.server.ProdServerStart$.start(ProdServerStart.scala:50)
cmak | at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
cmak | at play.core.server.ProdServerStart.main(ProdServerStart.scala)
I have a feeling that either my path to kafka-manager is wrong, or I might have to expose the hostname on my kafka container...
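One detail worth noting in the Compose file above: ZK_HOSTS is set to zookeper:2181 (missing an "o"), a hostname that will never resolve on the Compose network. A minimal sketch of the cmak service with that corrected, and with the command override dropped on the assumption that the image's default entrypoint already points Play at a config that declares an application loader:

cmak:
  image: hlebalbau/kafka-manager
  container_name: cmak
  restart: always
  depends_on:
    - kafka
    - zookeeper
  ports:
    - "9080:9080"
  environment:
    - ZK_HOSTS=zookeeper:2181   # hostname fixed to match the zookeeper service
    - APPLICATION_SECRET=letmein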
I'm running ksqldb-server from a docker-compose found here: https://ksqldb.io/quickstart.html#quickstart-content
My Kafka bootstrap server is running on the same VM in standalone mode.
I can see the messages in one topic with a console consumer:
sudo kafka-avro-console-consumer --from-beginning --bootstrap-server localhost:9092 --topic source-air-input --property print.key=true --max-messages 2
Unfortunately, running ksql from Docker gives me this error.
ksqldb-server | [2021-07-15 23:12:58,772] ERROR Failed to start KSQL (io.confluent.ksql.rest.server.KsqlServerMain:66)
ksqldb-server | java.lang.RuntimeException: Failed to get Kafka cluster information
ksqldb-server | at io.confluent.ksql.services.KafkaClusterUtil.getKafkaClusterId(KafkaClusterUtil.java:107)
ksqldb-server | at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:624)
ksqldb-server | at io.confluent.ksql.rest.server.KsqlServerMain.createExecutable(KsqlServerMain.java:152)
ksqldb-server | at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:59)
ksqldb-server | Caused by: java.util.concurrent.TimeoutException
ksqldb-server | at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
ksqldb-server | at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
ksqldb-server | at io.confluent.ksql.services.KafkaClusterUtil.getKafkaClusterId(KafkaClusterUtil.java:105)
My docker-compose.yml is the following.
---
version: '3.9'
services:
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.18.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: host.docker.internal:9092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.18.0
    container_name: ksqldb-cli
    depends_on:
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
I tried many possible configurations for the address, without success.
What might be wrong?
I also tried the suggestions from the question "From inside of a Docker container, how do I connect to the localhost of the machine?", without success.
Modify Kafka's server.properties
listener.security.protocol.map=PLAINTEXT_DOCKER:PLAINTEXT,PLAINTEXT_LOCAL:PLAINTEXT
listeners=PLAINTEXT_DOCKER://:29092,PLAINTEXT_LOCAL://localhost:9092
advertised.listeners=PLAINTEXT_DOCKER://host.docker.internal:29092,PLAINTEXT_LOCAL://localhost:9092
inter.broker.listener.name=PLAINTEXT_LOCAL
Update your Compose like so, to point at the host rather than itself:
version: '3.9'
services:
  # TODO: add schema-registry
  #   environment:
  #     SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: host.docker.internal:29092
  #   extra_hosts:
  #     - "host.docker.internal:host-gateway"
  # or any other Kafka client
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.18.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    ports:
      - "8088:8088"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      KSQL_BOOTSTRAP_SERVERS: host.docker.internal:29092
      ...
(Tested on Mac.) Getting the /info endpoint of KSQL:
http :8088/info
HTTP/1.1 200 OK
content-length: 133
content-type: application/json

{
  "KsqlServerInfo": {
    "kafkaClusterId": "ZH2-h1W_SaivCW0qa8DQGA",
    "ksqlServiceId": "default_",
    "serverStatus": "RUNNING",
    "version": "0.18.0"
  }
}
Replace all host.docker.internal above with the external hostname/IP of the machine if Kafka is a remote server.
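As a sanity check once the listeners are split this way, the broker should answer on the host-side listener before ksqlDB is retried. A hedged example, assuming the Confluent CLI tools used earlier are on the host's PATH:

kafka-broker-api-versions --bootstrap-server localhost:9092   # should list the broker and its supported APIs

If that succeeds, re-running the http :8088/info request above should return RUNNING.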
We are using Curator service discovery in Docker and Kubernetes environments. We set up the connection string using the DNS names of the containers/pods. The problem I am seeing is that it resolves these down to IP addresses; the container or pod can change its IP address, and Curator does not seem to pick up the change.
The behavior I see: if I stand up a 3-node ZooKeeper cluster and one or more agents, then roll the ZooKeeper nodes one at a time so that they each change their IP address, all the clients lose their connection when I bounce the third ZooKeeper instance.
Is there a way to force it to always use the DNS names for the connection?
Here is my Compose example:
version: '2.4'
x-zookeeper:
  &zookeeper-env
  JVMFLAGS: -Dzookeeper.4lw.commands.whitelist=ruok
  ZOO_ADMINSERVER_ENABLED: 'true'
  ZOO_STANDALONE_ENABLED: 'false'
  ZOO_SERVERS: server.1=zookeeper1:2888:3888;2181 server.2=zookeeper2:2888:3888;2181 server.3=zookeeper3:2888:3888;2181
x-agent:
  &agent-env
  ZK_CONNECTION: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
  SERVICE_NAME: myservice
services:
  zookeeper1:
    image: artifactory.rd2.thingworx.io/zookeeper:${ZOOKEEPER_IMAGE_VERSION}
    restart: always
    ports:
      - 2181
      - 8080
    healthcheck:
      test: echo ruok | nc localhost 2181 | grep imok
      interval: 15s
    environment:
      <<: *zookeeper-env
      ZOO_MY_ID: 1
  zookeeper2:
    image: artifactory.rd2.thingworx.io/zookeeper:${ZOOKEEPER_IMAGE_VERSION}
    restart: always
    ports:
      - 2181
      - 8080
    healthcheck:
      test: echo ruok | nc localhost 2181 | grep imok
      interval: 15s
    environment:
      <<: *zookeeper-env
      ZOO_MY_ID: 2
  zookeeper3:
    image: artifactory.rd2.thingworx.io/zookeeper:${ZOOKEEPER_IMAGE_VERSION}
    restart: always
    ports:
      - 2181
      - 8080
    healthcheck:
      test: echo ruok | nc localhost 2181 | grep imok
      interval: 15s
    environment:
      <<: *zookeeper-env
      ZOO_MY_ID: 3
  agent1:
    image: artifactory.rd2.thingworx.io/twxdevops/discovery-tool:latest
    environment:
      <<: *agent-env
      GLOBAL_ID: AGENT1
  agent2:
    image: artifactory.rd2.thingworx.io/twxdevops/discovery-tool:latest
    environment:
      <<: *agent-env
      GLOBAL_ID: AGENT2
  agent3:
    image: artifactory.rd2.thingworx.io/twxdevops/discovery-tool:latest
    environment:
      <<: *agent-env
      GLOBAL_ID: AGENT3
  agent4:
    image: artifactory.rd2.thingworx.io/twxdevops/discovery-tool:latest
    environment:
      <<: *agent-env
      GLOBAL_ID: AGENT4
  agent5:
    image: artifactory.rd2.thingworx.io/twxdevops/discovery-tool:latest
    environment:
      <<: *agent-env
      GLOBAL_ID: AGENT5
The run steps are:
docker-compose up -d zookeeper1 zookeeper2 zookeeper3 agent1
docker-compose rm -sf zookeeper3
docker-compose up -d agent2
docker-compose up -d zookeeper3
docker-compose rm -sf zookeeper2
docker-compose up -d agent3
docker-compose up -d zookeeper2
docker-compose rm -sf zookeeper1
docker-compose up -d agent5
docker-compose up -d zookeeper1
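To see whether DNS itself is still fine while the client clings to stale addresses, it can help to compare what the names resolve to after each roll; a small hedged check, assuming getent is available in the agent image:

docker-compose exec agent1 getent hosts zookeeper1 zookeeper2 zookeeper3
# The names should resolve to the *new* container IPs even while the
# ZooKeeper client keeps reconnecting to the old, cached ones.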
After I kill the last ZooKeeper node, the agent gets the following error and does not recover. You can see it is referencing an IP address:
Path:null finished:false header:: 5923,4 replyHeader:: 5923,8589934594,0 request:: '/services/myservice/cc1996fb-cca5-4108-bd06-567b45f594d7,F response:: #7b226e616d65223a226d7973657276696365222c226964223a2263633139393666622d636361352d343130382d626430362d353637623435663539346437222c2261646472657373223a223137322e32312e302e33222c22706f7274223a383038302c2273736c506f7274223a6e756c6c2c227061796c6f6164223a7b2240636c617373223a22636f6d2e7468696e67776f72782e646973636f766572792e7a6b2e53657276696365496e7374616e636544657461696c73222c2261747472696275746573223a7b22474c4f42414c4944223a224147454e5433227d7d2c22726567697374726174696f6e54696d65555443223a313634393739313735353936322c227365727669636554797065223a2244594e414d4943222c2275726953706563223a7b227061727473223a5b7b2276616c7565223a2261646472657373222c227661726961626c65223a747275657d2c7b2276616c7565223a223a222c227661726961626c65223a66616c73657d2c7b2276616c7565223a22706f7274222c227661726961626c65223a747275657d5d7d7d,s{4294967301,4294967301,1649791757073,1649791757073,0,0,0,144117976615550976,404,0,4294967301}
agent1_1 | 19:48:46.438 [ServiceEventWatcher-myservice] DEBUG com.thingworx.discovery.zk.ZookeeperProvider - ZooKeeper resolved addresses for service myservice: [ServiceDefinition [serviceName=myservice, host=172.21.0.7, port=8080, tags={GLOBALID=AGENT2}], ServiceDefinition [serviceName=myservice, host=172.21.0.4, port=8080, tags={GLOBALID=AGENT1}], ServiceDefinition [serviceName=myservice, host=172.21.0.3, port=8080, tags={GLOBALID=AGENT3}]]
agent1_1 | 19:48:47.070 [main-SendThread(172.21.0.5:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x200028941eb0001 for sever service-discovery-docker-tests_zookeeper2_1.service-discovery-docker-tests_default/172.21.0.5:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
agent1_1 | org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x200028941eb0001, likely server has closed socket
agent1_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
agent1_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
agent1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275)
agent1_1 | 19:48:47.171 [main-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
agent1_1 | 19:48:47.363 [main-SendThread(172.21.0.9:2181)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to 172.21.0.9
agent1_1 | 19:48:47.363 [main-SendThread(172.21.0.9:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 172.21.0.9/172.21.0.9:2181.
agent1_1 | 19:48:47.363 [main-SendThread(172.21.0.9:2181)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will not attempt to authenticate using SASL (unknown error)
agent1_1 | 19:48:47.430 [ServiceEventWatcher-myservice] DEBUG com.thingworx.discovery.zk.ZookeeperProvider - Getting registered addresses from ZooKeeper for service myservice
The ZooKeeper cluster itself is happy and fine. So the main question: is there a way to have it use the DNS names instead of the IP addresses? I should also mention that service discovery uses ephemeral nodes, so a disconnect-and-reconnect cycle is bad.
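For the discovery side of the problem, the hex payload above shows the registrations themselves carry IPs (the address field decodes to 172.21.0.3), so peers will keep using stale addresses regardless of what the ZooKeeper connection does. One hedged option is to register a DNS name explicitly instead of letting Curator auto-detect a local IP; a minimal curator-x-discovery sketch, assuming a String payload and that each agent's container hostname resolves on the network:

import org.apache.curator.x.discovery.ServiceInstance;

public class DnsRegistration {
    public static ServiceInstance<String> build() throws Exception {
        // Register the Compose/Kubernetes DNS name rather than the default
        // auto-detected local IP, so peers resolve it freshly on each lookup.
        return ServiceInstance.<String>builder()
                .name("myservice")
                .address(System.getenv("HOSTNAME")) // assumption: this hostname is resolvable by peers
                .port(8080)
                .build();
    }
}

This does not by itself change the ZooKeeper client's own caching of resolved addresses on reconnect, but it keeps the published service addresses stable across container restarts.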