Unable to log in to HTTP-authentication-enabled Confluent Kafka Control Center - Docker

I'm unable to log in to Control Center. There are no error messages, and all Docker containers are running fine.
The content of the control-center-jaas.properties file (username: password,role format) is:
admin: admin,admin
user: user,user
This is how the webpage looks:
Here is the docker compose file:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:7.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
      - "8091:8091"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:7.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.2.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  control-center:
    image: confluentinc/cp-enterprise-control-center:7.2.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
      CONTROL_CENTER_REST_LISTENERS: http://0.0.0.0:9021
      CONTROL_CENTER_REST_AUTHENTICATION_METHOD: BASIC
      CONTROL_CENTER_REST_AUTHENTICATION_REALM: ControlCenter
      CONTROL_CENTER_REST_AUTHENTICATION_ROLES: admin,user
      CONTROL_CENTER_OPTS: "-Djava.security.auth.login.config=/etc/kafka/control-center-jaas.properties"
      CONTROL_CENTER_JAAS_CONFIG: |
        ControlCenter {
          org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
          file="/etc/kafka/control-center-jaas.properties";
        };
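A few checks that can help narrow this down (a rough sketch; the container name control-center, the credentials and the path /etc/kafka/control-center-jaas.properties are simply taken from the compose file and JAAS file above):

# Is the JAAS properties file actually present inside the container?
docker exec control-center ls -l /etc/kafka/control-center-jaas.properties
docker exec control-center cat /etc/kafka/control-center-jaas.properties

# Did the REST authentication settings reach the container environment?
docker exec control-center env | grep CONTROL_CENTER_REST

# Does basic auth answer differently with and without credentials?
# (401 without -u and 200 with -u would suggest the login module is working)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9021
curl -s -o /dev/null -w "%{http_code}\n" -u admin:admin http://localhost:9021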
Reference: https://docs.confluent.io/platform/current/control-center/security/authentication.html
Thank you

Related

kafka after reconnect with docker: Connection to node could not be established. Broker may not be available

I'm trying to test a cluster's reliability with 3 Kafka nodes, 3 ZooKeeper nodes and one Kafka Connect instance, disconnecting and reconnecting some nodes at random.
The problem happens after reconnecting a node: I get "Broker may not be available" because the port of the reconnected node is not reachable. If I restart that node's container, Kafka starts working correctly again.
The reconnect problem happens only sometimes (it is not regular), and the Kafka container remains up but disconnected from the network.
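For reference, the disconnect/reconnect cycle looks roughly like this (a sketch; the network name kafkanet is only an example, the real one comes from docker network ls):

# Drop one broker off the compose network, wait, then reattach it
docker network disconnect kafkanet kafka-2
sleep 30
docker network connect kafkanet kafka-2

# Check whether the broker came back and its listener answers again
docker logs --tail 50 kafka-2
docker exec kafka-2 kafka-broker-api-versions --bootstrap-server kafka-2:9092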
This is my docker-compose file:
version: "2.0"
services:
postgres:
container_name: postgres
ports:
- '5432:5432'
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=shipment_db
- PGPASSWORD=password
image: 'debezium/postgres:13'
postgres-dest:
container_name: postgres-dest
ports:
- '5433:5432'
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=shipment_db
- PGPASSWORD=password
image: 'debezium/postgres:13'
zookeeper-1:
container_name: zookeeper-1
ports:
- '22181:2181'
image: 'confluentinc/cp-zookeeper:5.4.6'
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 22181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
KAFKA_JMX_PORT: 39999
JMX_PORT: 39999
ZOOKEEPER_SERVERS: "0.0.0.0:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
zookeeper-2:
container_name: zookeeper-2
ports:
- '32181:2181'
image: 'confluentinc/cp-zookeeper:5.4.6'
environment:
ZOOKEEPER_SERVER_ID: 2
ZOOKEEPER_CLIENT_PORT: 32181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
KAFKA_JMX_PORT: 39999
JMX_PORT: 39999
ZOOKEEPER_SERVERS: "zookeeper-1:22888:23888;0.0.0.0:32888:33888;zookeeper-3:42888:43888"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
depends_on:
- zookeeper-1
zookeeper-3:
container_name: zookeeper-3
ports:
- '42181:2181'
image: 'confluentinc/cp-zookeeper:5.4.6'
environment:
ZOOKEEPER_SERVER_ID: 3
ZOOKEEPER_CLIENT_PORT: 42181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
KAFKA_JMX_PORT: 39999
JMX_PORT: 39999
ZOOKEEPER_SERVERS: "zookeeper-1:22888:23888;zookeeper-2:32888:33888;0.0.0.0:42888:43888"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
depends_on:
- zookeeper-2
kafka-1:
container_name: kafka-1
ports:
- '29092:9092'
image: confluentinc/cp-kafka:7.0.1 # debezium/kafka:1.8.0.Final #'debezium/kafka:1.7'
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
KAFKA_LISTENERS: INTERNAL://kafka-1:9092,OUTSIDE://0.0.0.0:29092
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-1:9092,OUTSIDE://kafka-1:29092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3 # For group coordinator
# https://stackoverflow.com/questions/42015158/what-is-the-difference-in-kafka-between-a-consumer-group-coordinator-and-a-consu
#KAFKA_JMX_PORT: 49999
#JMX_PORT: 49999
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
restart: always
kafka-2:
container_name: kafka-2
ports:
- '39092:9092'
image: confluentinc/cp-kafka:7.0.1 # debezium/kafka:1.8.0.Final #'debezium/kafka:1.7'
environment:
KAFKA_BROKER_ID: 2
KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
KAFKA_LISTENERS: INTERNAL://kafka-2:9092,OUTSIDE://0.0.0.0:39092
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-2:9092,OUTSIDE://kafka-2:39092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
#KAFKA_JMX_PORT: 49999
#JMX_PORT: 49999
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
- kafka-1
restart: always
kafka-3:
container_name: kafka-3
ports:
- '49092:9092'
image: confluentinc/cp-kafka:7.0.1 # debezium/kafka:1.8.0.Final #'debezium/kafka:1.7'
environment:
KAFKA_BROKER_ID: 3
KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
KAFKA_LISTENERS: INTERNAL://kafka-3:9092,OUTSIDE://0.0.0.0:49092
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-3:9092,OUTSIDE://kafka-3:49092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
#KAFKA_JMX_PORT: 49999
#JMX_PORT: 49999
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
- kafka-2
restart: always
connect:
image: confluentinc/cp-kafka-connect:7.0.1 #debezium/connect:1.8.0.Final # debezium/connect:1.7
hostname: connect
container_name: connect
ports:
- 8083:8083
environment:
CONNECT_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092, kafka-3:9092
CONNECT_GROUP_ID: 1
CONNECT_CONFIG_STORAGE_TOPIC: my_connect_configs
CONNECT_OFFSET_STORAGE_TOPIC: my_connect_offsets
CONNECT_STATUS_STORAGE_TOPIC: my_connect_statuses
CONNECT_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
CONNECT_GROUP_ID: connect-cluster-A
CONNECT_PLUGIN_PATH: /kafka/data, /kafka/connect
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_ADVERTISED_HOST_NAME: localhost
#EXTERNAL_LIBS_DIR: /kafka/external_libs,/kafka/data
CLASSPATH: /kafka/data/postgresql-42.2.19.jar # '*' can also be used
#KAFKA_CONNECT_PLUGINS_DIR: /kafka/data, /kafka/connect # old version
CONNECT_PLUGIN_PATH: /kafka/data, /kafka/connect
#CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect=DEBUG,org.apache.plc4x.kafka.Plc4xSinkConnector=DEBUG"
CONNECT_LOG4J_ROOT_LOGLEVEL: DEBUG
volumes:
- type: bind
source: ./plugins
target: /kafka/data
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
- kafka-1
- kafka-2
- kafka-3
- postgres
links:
- zookeeper-1
- zookeeper-2
- zookeeper-3
- kafka-1
- kafka-2
- kafka-3
- postgres
- postgres-dest
ksqldb-server:
image: confluentinc/ksqldb-server:0.23.1
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- kafka-1
- kafka-2
- kafka-3
- zookeeper-1
- zookeeper-2
- zookeeper-3
ports:
- "8088:8088"
volumes:
- "./confluent-hub-components/:/usr/share/kafka/plugins/"
environment:
KSQL_LISTENERS: "http://0.0.0.0:8088"
KSQL_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
KSQL_KSQL_CONNECT_URL: http://connect:8083
I'm working with the latest version of Docker (20.10.12) and the latest version of Docker Compose (1.29.2).
Do you have any suggestions? Thanks in advance.

Apache Kafka Brokers (Cluster): exited with code 1 on docker-compose

I'm learning Docker and trying to set up a Kafka cluster with 3 ZooKeeper instances and 3 brokers.
Sometimes the brokers run fine, but sometimes they exit with status 1 and the logs show the following message:
kafka.common.InconsistentClusterIdException: The Cluster ID p2ouSB1rTKyUddeL26jpxg doesn't match stored clusterId Some(4QuK9rOmTleiRryAOA5zwA) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
I searched a lot before posting this question and nothing I found resolved the issue.
At first I thought it was the volumes I was using to persist ZooKeeper data and logs on the host, but now that I have gotten rid of them, I'm still getting the same error and have no idea how to set up a cluster ID through docker-compose and share it among all brokers. Is there a fix for this?
I'm running Docker version 20.10.10, build b485636 and Docker Compose version v2.1.1 on Windows.
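For reference, the two IDs in the error can be inspected directly; if the broker's stored ID differs from the one ZooKeeper reports, the broker refuses to start (a sketch, assuming the default Confluent data dir /var/lib/kafka/data and the container names from the compose file below):

# Cluster ID the broker persisted locally on a previous run
docker exec broker-1 cat /var/lib/kafka/data/meta.properties

# Cluster ID currently registered in ZooKeeper
docker exec zookeeper-1 zookeeper-shell localhost:2181 get /cluster/id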
This is my docker-compose.yml:
version: '3.9'
services:
zookeeper-1:
hostname: zookeeper-1
container_name: zookeeper-1
extends:
file: ./docker/services.yml
service: zookeeper
ports:
- '2181:2181'
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 2181
zookeeper-2:
hostname: zookeeper-2
container_name: zookeeper-2
extends:
file: ./docker/services.yml
service: zookeeper
ports:
- '2182:2182'
environment:
ZOOKEEPER_SERVER_ID: 2
ZOOKEEPER_CLIENT_PORT: 2182
zookeeper-3:
hostname: zookeeper-3
container_name: zookeeper-3
extends:
file: ./docker/services.yml
service: zookeeper
ports:
- '2183:2183'
environment:
ZOOKEEPER_SERVER_ID: 3
ZOOKEEPER_CLIENT_PORT: 2183
broker-1:
container_name: broker-1
extends:
file: ./docker/services.yml
service: broker
ports:
- '19092:9092'
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9092,PLAINTEXT_HOST://localhost:19092
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
broker-2:
container_name: broker-2
extends:
file: ./docker/services.yml
service: broker
ports:
- '29092:9092'
environment:
KAFKA_BROKER_ID: 2
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-2:9092,PLAINTEXT_HOST://localhost:29092
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-2:9092
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
broker-3:
container_name: broker-3
extends:
file: ./docker/services.yml
service: broker
ports:
- '39092:9092'
environment:
KAFKA_BROKER_ID: 3
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-3:9092,PLAINTEXT_HOST://localhost:39092
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-3:9092
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
connect:
image: confluentinc/cp-kafka-connect:latest
hostname: connect
container_name: connect
depends_on:
- broker-1
- broker-2
- broker-3
ports:
- '8083:8083'
environment:
CONNECT_BOOTSTRAP_SERVERS: 'broker-1:9092,broker-2:9092,broker-3:9092'
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
# CLASSPATH required due to CC-2422
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.2.1.jar
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor'
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor'
CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components,/connect-plugins'
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
control-center:
image: confluentinc/cp-enterprise-control-center:6.2.1
hostname: control-center
container_name: control-center
depends_on:
- broker-1
- broker-2
- broker-3
- connect
ports:
- '9021:9021'
environment:
CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker-1:9092,broker-2:9092,broker-3:9092'
CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
#CONTROL_CENTER_KSQL_KSQLDB1_URL: 'http://ksqldb-server:8088'
#CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: 'http://localhost:8088'
#CONTROL_CENTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
CONTROL_CENTER_REPLICATION_FACTOR: 1
CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
CONFLUENT_METRICS_TOPIC_REPLICATION: 1
PORT: 9021
This is my services.yml file:
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.2.1
environment:
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_PEER_PORT: 2888
ZOOKEEPER_LEADER_PORT: 3888
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
broker:
image: confluentinc/cp-server:6.2.1
environment:
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_JMX_PORT: 9101
KAFKA_JMX_HOSTNAME: localhost
# KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

Not able to connect to kafka-connect using docker-compose file

I am using a docker-compose.yml file for my Kafka setup, and it was working as expected. Since I am trying to connect to an Oracle database, I need to install the ojdbc driver as well. So I modified my compose file to download the ojdbc jar directly, but after adding this code I am not able to start kafka-connect.
Additional code added to the file:
command:
- /bin/bash
- -c
- /
cd /usr/share/java/kafka-connect-jdbc/
curl https://maven.xwiki.org/externals/com/oracle/jdbc/ojdbc8/12.2.0.1/ojdbc8-12.2.0.1.jar -o ojdbc8-12.2.0.1.jar
/etc/confluent/docker/run
I also tried to start the kafka-connect container manually, but that is not working either. As you can see in the screenshot below, all services were created successfully.
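Before looking at the full file, the container's own state and logs usually show why it stops (a sketch, assuming the container name kafka-connect from the compose file below):

# Is the container exited or restarting, and what does it log on shutdown?
docker-compose ps
docker logs --tail 100 kafka-connect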
Full docker-compose.yml file:
---
version: '2'
services:
zookeeper:
image: confluentinc/cp-zookeeper:5.4.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-kafka:5.4.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "29092:29092"
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
schema-registry:
image: confluentinc/cp-schema-registry:5.4.0
hostname: schema-registry
container_name: schema-registry
depends_on:
- zookeeper
- broker
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
kafka-connect:
image: confluentinc/cp-kafka-connect:5.4.0
hostname: kafka-connect
container_name: kafka-connect
depends_on:
- broker
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: kafka-connect
CONNECT_CONFIG_STORAGE_TOPIC: kafka-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: kafka-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: kafka-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
command:
- /bin/bash
- -c
- /
cd /usr/share/java/kafka-connect-jdbc/
curl https://maven.xwiki.org/externals/com/oracle/jdbc/ojdbc8/12.2.0.1/ojdbc8-12.2.0.1.jar -o ojdbc8-12.2.0.1.jar
/etc/confluent/docker/run
ksqldb-server:
image: confluentinc/ksqldb-server:0.9.0
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- broker
- kafka-connect
ports:
- "8088:8088"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
KSQL_BOOTSTRAP_SERVERS: "broker:29092"
KSQL_HOST_NAME: ksqldb-server
KSQL_LISTENERS: "http://0.0.0.0:8088"
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_KSQL_CONNECT_URL: "http://connect:8083"
ksqldb-cli:
image: confluentinc/ksqldb-cli:0.9.0
container_name: ksqldb-cli
depends_on:
- broker
- kafka-connect
- ksqldb-server
entrypoint: /bin/sh
tty: true
rest-proxy:
image: confluentinc/cp-kafka-rest:5.4.0
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- "8082:8082"
hostname: rest-proxy
container_name: rest-proxy
environment:
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
OK, so this got resolved after adding this line at the bottom of the command section:
sleep infinity
sleep infinity is necessary, because we’ve sent the
/etc/confluent/docker/run process to a background thread (&) and so
the container will exit if the main command finishes.
More details here : https://rmoff.net/2018/12/15/docker-tips-and-tricks-with-ksql-and-kafka/
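Putting it together, the shell that the command section hands to bash -c ends up being:

cd /usr/share/java/kafka-connect-jdbc/
curl https://maven.xwiki.org/externals/com/oracle/jdbc/ojdbc8/12.2.0.1/ojdbc8-12.2.0.1.jar -o ojdbc8-12.2.0.1.jar
# run the normal image entrypoint in the background ...
/etc/confluent/docker/run &
# ... and keep the main command alive so the container does not exit
sleep infinity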

Not able to locate ojdbc jar in kafka-connect

I am working on creating a pipeline to stream data from an Oracle database. I am using the compose file below to bring up the required Kafka services, and I am using the command option to download the ojdbc jar. I have explicitly checked the curl command used in this compose file; I am able to download the jar with it, so it should work fine here as well.
I am able to bring up all of the required services without any issue.
But when I navigate to the kafka-connect-jdbc folder to look for the ojdbc jar, I can't see it there. Ideally, it should have been downloaded and placed in the kafka-connect-jdbc folder.
docker-compose.yml file:
---
version: '2'
services:
zookeeper:
image: confluentinc/cp-zookeeper:5.4.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-kafka:5.4.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "29092:29092"
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
schema-registry:
image: confluentinc/cp-schema-registry:5.4.0
hostname: schema-registry
container_name: schema-registry
depends_on:
- zookeeper
- broker
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
kafka-connect:
image: confluentinc/cp-kafka-connect:5.4.0
hostname: kafka-connect
container_name: kafka-connect
depends_on:
- broker
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: kafka-connect
CONNECT_CONFIG_STORAGE_TOPIC: kafka-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: kafka-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: kafka-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
command:
- /bin/bash
- -c
- /
cd /usr/share/java/kafka-connect-jdbc/
echo "fetching jar"
curl https://maven.xwiki.org/externals/com/oracle/jdbc/ojdbc8/12.2.0.1/ojdbc8-12.2.0.1.jar -o ojdbc8-12.2.0.1.jar
/etc/confluent/docker/run &
sleep infinity
ksqldb-server:
image: confluentinc/ksqldb-server:0.9.0
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- broker
- kafka-connect
ports:
- "8088:8088"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
KSQL_BOOTSTRAP_SERVERS: "broker:29092"
KSQL_HOST_NAME: ksqldb-server
KSQL_LISTENERS: "http://0.0.0.0:8088"
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_KSQL_CONNECT_URL: "http://connect:8083"
ksqldb-cli:
image: confluentinc/ksqldb-cli:0.9.0
container_name: ksqldb-cli
depends_on:
- broker
- kafka-connect
- ksqldb-server
entrypoint: /bin/sh
tty: true
rest-proxy:
image: confluentinc/cp-kafka-rest:5.4.0
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- "8082:8082"
hostname: rest-proxy
container_name: rest-proxy
environment:
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
I suggest that you download the JAR outside of Docker, then volume-mount it. That way, your container isn't trying to re-download it every time you restart.
mkdir jdbc-jars
curl https://maven.xwiki.org/externals/com/oracle/jdbc/ojdbc8/12.2.0.1/ojdbc8-12.2.0.1.jar -o jdbc-jars/ojdbc8-12.2.0.1.jar
...
...
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
volumes:
- ./jdbc-jars:/usr/share/java/kafka-connect-jdbc/lib
ksqldb-server:
image: confluentinc/ksqldb-server:0.9.0
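A quick way to confirm the mounted driver is visible and the JDBC plugin still loads (a sketch using the paths from the snippet above):

# The jar should appear inside the container at the mount target
docker exec kafka-connect ls /usr/share/java/kafka-connect-jdbc/lib

# The JDBC connector should still be reported by the Connect REST API
curl -s localhost:8083/connector-plugins | grep -i jdbc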

ksqldb - 'select * from stream' results in io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor cannot be found

I gave ksqlDB a try and made a docker-compose.yml like this:
---
version: '2'
services:
zookeeper:
image: confluentinc/cp-zookeeper:5.3.1
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-enterprise-kafka:5.3.1
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "29092:29092"
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
KAFKA_DELETE_TOPIC_ENABLE: 'true'
KAFKA_MESSAGE_MAX_BYTES: 10000000
KAFKA_MAX_PARTITION_FETCH_BYTES: 10000000
schema-registry:
image: confluentinc/cp-schema-registry:5.3.1
hostname: schema-registry
container_name: schema-registry
depends_on:
- zookeeper
- broker
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
connect:
image: confluentinc/cp-server-connect:5.3.1
hostname: connect
container_name: connect
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- "8083:8083"
volumes:
- C:\WS\prototypes\POCONE\Werkstatt\connect-coppclark\target:/usr/share/java/coppclark
environment:
CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
# CLASSPATH required due to CC-2422
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.3.1.jar
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
COMPOSE_CONVERT_WINDOWS_PATHS: 1
CONNECT_PRODUCER_MAX_REQUEST_SIZE: 10000000
CONNECT_CONSUMER_MAX_PARTITION_FETCH_BYTES: 10000000
# https://docs.ksqldb.io/en/latest/operate-and-deploy/installation/server-config/integrate-ksql-with-confluent-control-center/
control-center:
image: confluentinc/cp-enterprise-control-center:5.3.1
hostname: control-center
container_name: control-center
depends_on:
- zookeeper
- broker
- schema-registry
- connect
- ksqldb-server
ports:
- "9021:9021"
environment:
CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
CONTROL_CENTER_KSQL_URL: "http://ksqldb-server:8088"
CONTROL_CENTER_KSQL_ADVERTISED_URL: "http://localhost:8088"
CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
CONTROL_CENTER_REPLICATION_FACTOR: 1
CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
CONFLUENT_METRICS_TOPIC_REPLICATION: 1
PORT: 9021
rest-proxy:
image: confluentinc/cp-kafka-rest:5.3.1
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- 8082:8082
hostname: rest-proxy
container_name: rest-proxy
environment:
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
ksqldb-server:
image: confluentinc/ksqldb-server:0.6.0
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- broker
- connect
ports:
- "8088:8088"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
KSQL_APPLICATION_ID: "cp-all-in-one"
KSQL_LISTENERS: http://0.0.0.0:8088
KSQL_BOOTSTRAP_SERVERS: broker:29092
KSQL_HOST_NAME: ksqldb-server
KSQL_KSQL_CONNECT_URL: http://connect:8083
KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
ksqldb-cli:
image: confluentinc/ksqldb-cli:0.6.0
container_name: ksqldb-cli
depends_on:
- broker
- ksqldb-server
entrypoint: /bin/sh
tty: true
# coppclark- static resources
coppclark-static-http:
image: springcloudstream/coppclarkstatic:0.0.1-SNAPSHOT
container_name: coppclark-static-http
ports:
- '8080:8080'
expose:
- '8080'
Then I created a new (custom) connector from within ksqlDB:
CREATE SOURCE CONNECTOR holidayEvent WITH (
'connector.class' = 'CoppclarkSourceConnector',
'topic' = 'holidayEvent',
'url' = 'http://coppclark-static-http:8080/coppclark.csv',
'csv.parser' = true
);
which results in records like this (in topic holidayEvent):
{
"CENTER_ID": "150",
"ISO_CURRENCY_CODE": "SRD",
"ISO_COUNTRY_CODE": "SR",
"RELATED_FINANCIAL_CENTER": "Paramaribo",
"EVENT_YEAR": "2067",
"EVENT_DATE": "20671225",
"EVENT_DAY_OF_WEEK": "Sun",
"EVENT_nAME": "Christmas Day",
"FILE_TYPE": "C"
}
A stream has been created:
CREATE STREAM holidayEventStream WITH(kafka_topic='holidayEvent', value_format='AVRO');
So far so good, but when I want to select from this created stream I get the following error:
ksql> select * from HOLIDAYEVENTSTREAM EMIT CHANGES;
Failed to construct kafka consumer
Caused by: Class io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor cannot be found
Caused by: io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
I'm sure I missed something but don't know what to do next.
Regards
Rene
Remove these two lines from your ksqldb-server service:
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
The interceptors are not shipped with ksqlDB 0.6.
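If in doubt, it is easy to confirm that the interceptor jar is simply not on the ksqlDB image (a rough check; it is expected to print nothing on ksqldb-server:0.6.0):

docker exec ksqldb-server find / -name 'monitoring-interceptors*.jar' 2>/dev/null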
