I have a setup that had been working fine since last December but suddenly refused to work.
My docker-compose YAML file looks like this:
version: "3.8"
services:
zookeeper1:
image: debezium/zookeeper:1.8
container_name: zookeeper1
ports:
- 2181:2181
networks:
- internalnet
kafka1:
image: debezium/kafka:1.8
container_name: kafka1
ports:
- 9092:9092
depends_on:
- zookeeper1
environment:
- KAFKA_BROKER_ID=100
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
- KAFKA_ADVERTISED_HOST_NAME=kafka1
- KAFKA_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092
- KAFKA_ADVERTISED_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT
- KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER_BOB
- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS=60000
networks:
- internalnet
volumes:
- ./kafka/kafka1/kafka_data:/kafka/data
- ./kafka/kafka1/kafka_logs:/kafka/logs
networks:
internalnet:
driver: bridge
Zookeeper is running OK, but Kafka fails to start with the following log:
WARNING: Using default NODE_ID=1, which is valid only for non-clustered installations.
Starting in ZooKeeper mode using NODE_ID=1.
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
Using configuration config/server.properties.
Using KAFKA_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092 and KAFKA_ADVERTISED_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092
2022-09-16 16:26:57,844 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 16:26:58,521 - INFO [main:X509Util#77] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2022-09-16 16:26:58,667 - INFO [main:LoggingSignalHandler#72] - Registered signal handlers for TERM, INT, HUP
2022-09-16 16:26:58,674 - INFO [main:Logging#66] - starting
2022-09-16 16:26:58,678 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 16:26:58,719 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 16:26:58,733 - INFO [main:Environment#98] - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:host.name=44841d8b6caa
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:java.version=11.0.14.1
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:java.vendor=Red Hat, Inc.
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-11.0.14.1.1-5.fc34.x86_64
2022-09-16 16:26:58,735 - INFO [main:Environment#98] - Client environment:java.class.path=/kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.5.0.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-lang3-3.8.1.jar:/kafka/libs/connect-api-3.0.0.jar:/kafka/libs/connect-basic-auth-extension-3.0.0.jar:/kafka/libs/connect-file-3.0.0.jar:/kafka/libs/connect-json-3.0.0.jar:/kafka/libs/connect-mirror-3.0.0.jar:/kafka/libs/connect-mirror-client-3.0.0.jar:/kafka/libs/connect-runtime-3.0.0.jar:/kafka/libs/connect-transforms-3.0.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.12.3.jar:/kafka/libs/jackson-core-2.12.3.jar:/kafka/libs/jackson-databind-2.12.3.jar:/kafka/libs/jackson-dataformat-csv-2.12.3.jar:/kafka/libs/jackson-datatype-jdk8-2.12.3.jar:/kafka/libs/jackson-jaxrs-base-2.12.3.jar:/kafka/libs/jackson-jaxrs-json-provider-2.12.3.jar:/kafka/libs/jackson-module-jaxb-annotations-2.12.3.jar:/kafka/libs/jackson-module-scala_2.12-2.12.3.jar:/kafka/libs/jakarta.activation-api-1.2.1.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.2.jar:/kafka/libs/javassist-3.27.0-GA.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.0.jar:/kafka/libs/jersey-client-2.34.jar:/kafka/libs/jersey-common-2.34.jar:/kafka/libs/jersey-container-servlet-2.34.jar:/kafka/libs/jersey-container-servlet-core-2.34.jar:/kafka/libs/jersey-hk2-2.34.jar:/kafka/libs/jersey-server-2.34.jar:/kafka/libs/jetty-client-9.4.43.v20210629.jar:/kafka/libs/jetty-continuation-9.4.43.v20210629.jar:/kafka/libs/jetty-http-9.4.43.v20210629.jar:/kafka/libs/jetty-io-9.4.43.v20210629.jar:/kafka/libs/jetty-security-9.4.43.v20210629.jar:/kafka/libs/jetty-server-9.4.43.v20210629.jar:/kafka/libs/jetty-servlet-9.4.43.v20210629.jar:/kafka/libs/jetty-servlets-9.4.43.v20210629.jar:/kafka/libs/jetty-util-9.4.43.v20210629.jar:/kafka/libs/jetty-util-ajax-9.4.43.v20210629.jar:/kafka/libs/jline-3.12.1.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/kafka-clients-3.0.0.jar:/kafka/libs/kafka-log4j-appender-3.0.0.jar:/kafka/libs/kafka-metadata-3.0.0.jar:/kafka/libs/kafka-raft-3.0.0.jar:/kafka/libs/kafka-server-common-3.0.0.jar:/kafka/libs/kafka-shell-3.0.0.jar:/kafka/libs/kafka-storage-3.0.0.jar:/kafka/libs/kafka-storage-api-3.0.0.jar:/kafka/libs/kafka-streams-3.0.0.jar:/kafka/libs/kafka-streams-examples-3.0.0.jar:/kafka/libs/kafka-streams-scala_2.12-3.0.0.jar:/kafka/libs/kafka-streams-test-utils-3.0.0.jar:/kafka/libs/kafka-tools-3.0.0.jar:/kafka/libs/kafka_2.12-3.0.0.jar:/kafka/libs/log4j-1.2.17.jar:/kafka/libs/lz4-java-1.7.1.jar:/kafka/libs/maven-artifact-3.8.1.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.62.Final.jar:/kafka/libs/netty-codec-4.1.62.Final.jar:/kafka/libs/netty-common-4.1.62.Final.jar:/kafka/libs/netty-handler-4.1.62.Final.jar:/kafka/libs/netty-resolver-4.1.62.Final.jar:/kafka/libs/netty-transport-4.1.62.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.62.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.62.Final.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/plexus-utils-3.2.1.jar:/kafka/libs/reflections-0.9.12.jar:/kafka/libs/rocksdbjni-6.1
9.3.jar:/kafka/libs/scala-collection-compat_2.12-2.4.4.jar:/kafka/libs/scala-java8-compat_2.12-1.0.0.jar:/kafka/libs/scala-library-2.12.14.jar:/kafka/libs/scala-logging_2.12-3.9.3.jar:/kafka/libs/scala-reflect-2.12.14.jar:/kafka/libs/slf4j-api-1.7.30.jar:/kafka/libs/slf4j-log4j12-1.7.30.jar:/kafka/libs/snappy-java-1.1.8.1.jar:/kafka/libs/trogdor-3.0.0.jar:/kafka/libs/zookeeper-3.6.3.jar:/kafka/libs/zookeeper-jute-3.6.3.jar:/kafka/libs/zstd-jni-1.5.0-2.jar
2022-09-16 16:26:58,740 - INFO [main:Environment#98] - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2022-09-16 16:26:58,745 - INFO [main:Environment#98] - Client environment:java.io.tmpdir=/tmp
2022-09-16 16:26:58,745 - INFO [main:Environment#98] - Client environment:java.compiler=<NA>
2022-09-16 16:26:58,745 - INFO [main:Environment#98] - Client environment:os.name=Linux
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:os.arch=amd64
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:os.version=5.10.16.3-microsoft-standard-WSL2
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:user.name=kafka
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:user.home=/kafka
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:user.dir=/kafka
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:os.memory.free=975MB
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:os.memory.max=1024MB
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:os.memory.total=1024MB
2022-09-16 16:26:58,754 - INFO [main:ZooKeeper#1006] - Initiating client connection, connectString=0.0.0.0:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#3fc79729
2022-09-16 16:26:58,782 - INFO [main:ClientCnxnSocket#239] - jute.maxbuffer value is 4194304 Bytes
2022-09-16 16:26:58,797 - INFO [main:ClientCnxn#1736] - zookeeper.request.timeout value is 0. feature enabled=false
2022-09-16 16:26:58,807 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 16:26:58,840 - INFO [main-SendThread(0.0.0.0:2181):ClientCnxn$SendThread#1181] - Opening socket connection to server 0.0.0.0/0.0.0.0:2181.
2022-09-16 16:26:58,842 - INFO [main-SendThread(0.0.0.0:2181):ClientCnxn$SendThread#1183] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
2022-09-16 16:26:58,861 - WARN [main-SendThread(0.0.0.0:2181):ClientCnxn$SendThread#1300] - Session 0x0 for sever 0.0.0.0/0.0.0.0:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
I also got the following log from Kafka's server.log:
2022-09-16 14:26:52,915 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 14:26:54,942 - INFO [main:Logging#66] - starting
2022-09-16 14:26:54,965 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 14:26:55,082 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 14:26:55,341 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 14:27:55,367 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closing.
2022-09-16 14:27:55,758 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closed.
2022-09-16 14:27:55,797 - ERROR [main:MarkerIgnoringBase#159] - Fatal error during KafkaServer startup. Prepare to shutdown
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:250)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:108)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1981)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:457)
at kafka.server.KafkaServer.startup(KafkaServer.scala:196)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
2022-09-16 14:27:55,813 - INFO [main:Logging#66] - shutting down
2022-09-16 14:27:55,858 - INFO [main:Logging#66] - shut down completed
2022-09-16 14:27:55,861 - ERROR [main:MarkerIgnoringBase#143] - Exiting Kafka.
2022-09-16 14:27:55,864 - INFO [kafka-shutdown-hook:Logging#66] - shutting down
2022-09-16 14:42:16,757 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 14:42:18,622 - INFO [main:Logging#66] - starting
2022-09-16 14:42:18,624 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 14:42:18,656 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 14:42:18,749 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 14:43:18,769 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closing.
2022-09-16 14:43:19,784 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closed.
2022-09-16 14:43:19,796 - ERROR [main:MarkerIgnoringBase#159] - Fatal error during KafkaServer startup. Prepare to shutdown
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:250)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:108)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1981)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:457)
at kafka.server.KafkaServer.startup(KafkaServer.scala:196)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
2022-09-16 14:43:19,809 - INFO [main:Logging#66] - shutting down
2022-09-16 14:43:19,858 - INFO [main:Logging#66] - shut down completed
2022-09-16 14:43:19,870 - ERROR [main:MarkerIgnoringBase#143] - Exiting Kafka.
2022-09-16 14:43:19,876 - INFO [kafka-shutdown-hook:Logging#66] - shutting down
2022-09-16 14:53:57,029 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 14:53:59,011 - INFO [main:Logging#66] - starting
2022-09-16 14:53:59,017 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 14:53:59,115 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 14:53:59,247 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 14:54:59,256 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closing.
2022-09-16 14:55:00,389 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closed.
2022-09-16 14:55:00,397 - ERROR [main:MarkerIgnoringBase#159] - Fatal error during KafkaServer startup. Prepare to shutdown
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:250)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:108)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1981)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:457)
at kafka.server.KafkaServer.startup(KafkaServer.scala:196)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
2022-09-16 14:55:00,400 - INFO [main:Logging#66] - shutting down
2022-09-16 14:55:00,491 - INFO [main:Logging#66] - shut down completed
2022-09-16 14:55:00,525 - ERROR [main:MarkerIgnoringBase#143] - Exiting Kafka.
2022-09-16 14:55:00,529 - INFO [kafka-shutdown-hook:Logging#66] - shutting down
I even went as far as uninstalling and reinstalling Docker Desktop, but the issue remains.
Extra info: I am running Docker Desktop (with WSL 2 on an Ubuntu distro) on Windows 11.
It's possible a Debezium update broke your setup, so I suggest you grab a recent compose file; many exist at
https://github.com/debezium/debezium-examples
Look at the logs:
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
The image is not using KAFKA_ZOOKEEPER_CONNECT. Remove the KAFKA_ prefix to set the appropriate value; your logs should then say something like ZOOKEEPER_CONNECT=zookeeper1:2181.
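For reference, a minimal sketch of the corrected environment block for the kafka1 service, assuming the Debezium image's documented un-prefixed variable names (the startup log above already shows the entrypoint reading ZOOKEEPER_CONNECT):

environment:
  # The Debezium entrypoint reads un-prefixed names; ZOOKEEPER_CONNECT is
  # confirmed by the "Using ZOOKEEPER_CONNECT=..." line in the startup log.
  - BROKER_ID=100
  - ZOOKEEPER_CONNECT=zookeeper1:2181

With that change the broker should resolve zookeeper1 over the internalnet bridge network instead of falling back to 0.0.0.0:2181.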
Related
I'm working on a Mac with Docker Desktop. I'm trying to run wurstmeister/kafka from Docker Compose and connect a producer to it.
This is my docker-compose.yml:
version: '3.8'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.7.0
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper:2888:3888;2181
    restart: on-failure
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:2.13-2.7.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
    restart: on-failure
    depends_on:
      - zookeeper
Then I have a producer connecting to localhost:9092 and sending a simple message. The producer works fine; it was tested with another Kafka image, confluentinc/cp-kafka:6.2.0.
When I try to use the producer with wurstmeister/kafka, I get a lot of these errors:
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=simple-producer] Created socket with SO_RCVBUF = 326640, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node -1
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Completed connection to node -1. Fetching API versions.
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Initiating API versions fetch from node -1.
22:07:13.422 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=simple-producer, correlationId=20) and timeout 30000 to node -1: {client_software_name=apache-kafka-java,client_software_version=2.7.0,_tagged_fields={}}
22:07:13.423 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=simple-producer] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.base/java.lang.Thread.run(Thread.java:829)
22:07:13.424 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Node -1 disconnected.
22:07:13.424 [kafka-producer-network-thread | simple-producer] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
Why does this happen? What is the cause of this error? And how can I make it work?
EDIT: added Kafka container logs.
The last logs from the Kafka container are below; no new logs appear when I try to connect the producer:
[2021-09-09 21:07:28,227] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-09 21:07:28,230] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-09 21:07:28,231] INFO Kafka startTimeMs: 1631221648211 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-09 21:07:28,238] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2021-09-09 21:07:28,365] INFO [broker-1-to-controller-send-thread]: Recorded new controller, from now on will use broker 1 (kafka.server.BrokerToControllerRequestThread)
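One detail worth flagging in the compose file above, as a hedged observation rather than a confirmed diagnosis: KAFKA_LISTENERS sets the addresses the broker binds inside the container, so binding EXTERNAL to localhost:9092 means Docker's port proxy can accept the published connection while nothing answers on the container's own interface, which matches the accept-then-EOF pattern in the producer log. A sketch that keeps the advertised names unchanged, using the wurstmeister variable names from the question:

environment:
  # bind on all container interfaces...
  KAFKA_LISTENERS: INTERNAL://0.0.0.0:19092,EXTERNAL://0.0.0.0:9092
  # ...but keep advertising the names clients actually use
  KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://localhost:9092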
I have been working on a Hyperledger Fabric 2.0 multi-org network running under the default ports. The setup is as follows:
Org1 (Peer0:7051, Peer1:8051, CA: 7054, couchdb0:5984, couchdb1:6984:5984)
Org2 (Peer0:9051, Peer1:10051, CA: 8054, couchdb2:7984:5984, couchdb3:8984:5984)
Orderer (Orderer1:7050, Orderer2:8050, Orderer3:9050) with the RAFT mechanism
The requirement is to redefine all the container ports mentioned above so that I can run the same Fabric application as two environments (one for testing, the stable version, and one for development).
I tried to change the ports of the peers, orderers, and CA (by specifying environment variables for the ports in docker-compose), but I don't have any such option for CouchDB, which always uses the default port (5984).
Is there any way to achieve this? It would also be helpful for running two different Fabric applications on the same virtual machine.
EDIT 1:
My docker-compose.yaml file (I have only included Org1 (Peer0, Peer1), Orderer1, ca-org1, couchdb0, and couchdb1):
version: "2"
networks:
test2:
services:
ca-org1:
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca.org1.test.com
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/priv_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-tls/tlsca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-tls/priv_sk
ports:
- "3054:3054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/ca/:/etc/hyperledger/fabric-ca-server-config
- ./channel/crypto-config/peerOrganizations/org1.test.com/tlsca/:/etc/hyperledger/fabric-ca-server-tls
container_name: ca.org1.test.com
hostname: ca.org1.test.com
networks:
- test2
orderer.test.com:
container_name: orderer.test.com
image: hyperledger/fabric-orderer:2.1
dns_search: .
environment:
- ORDERER_GENERAL_LOGLEVEL=info
- FABRIC_LOGGING_SPEC=INFO
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:3443
- ORDERER_GENERAL_LISTENPORT=3050
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 3050:3050
- 3443:3443
networks:
- test2
volumes:
- ./channel/genesis.block:/var/hyperledger/orderer/genesis.block
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/msp:/var/hyperledger/orderer/msp
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/tls:/var/hyperledger/orderer/tls
couchdb0:
container_name: couchdb0-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 1984:1984
networks:
- test2
couchdb1:
container_name: couchdb1-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 2984:1984
networks:
- test2
peer0.org1.test.com:
container_name: peer0.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer0.org1.test.com
- CORE_PEER_ADDRESS=peer0.org1.test.com:3051
- CORE_PEER_LISTENADDRESS=0.0.0.0:3051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.test.com:3052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:3052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.test.com:3051
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
depends_on:
- couchdb0
ports:
- 3051:3051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
peer1.org1.test.com:
container_name: peer1.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=debug
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer1.org1.test.com
- CORE_PEER_ADDRESS=peer1.org1.test.com:4051
- CORE_PEER_LISTENADDRESS=0.0.0.0:4051
- CORE_PEER_CHAINCODEADDRESS=peer1.org1.test.com:4052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:4052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.test.com:3051
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
ports:
- 4051:4051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
Thanks for the suggestions regarding CouchDB. I had thought we should only specify the default CouchDB port for each instance. In any case, I had missed the step of changing the container names in the first place (from the default peer0.org1.example.com to peer0.org1.test.com). I was able to start the Docker containers with new container names so that the existing containers, which are already running on the original ports, are not stopped (recreated).
The issue I am facing now is that the peer is not able to communicate with the couchdb-test URL:
U 04c Entering VerifyCouchConfig()
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04d Entering handleRequest() method=GET url=http://couchdb1-test:1984/ dbName=
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04e Request URL: http://couchdb1-test:1984/
2020-08-12 11:22:45.011 UTC [couchdb] handleRequest -> WARN 04f Retrying couchdb request in 125ms. Attempt:1 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.137 UTC [couchdb] handleRequest -> WARN 050 Retrying couchdb request in 250ms. Attempt:2 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.389 UTC [couchdb] handleRequest -> WARN 051 Retrying couchdb request in 500ms. Attempt:3 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
Hence, when I try to create a channel, the peer container exits, even though it was running until then, and it is not able to join the channel:
2020-08-12 10:58:29.264 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:29.301 UTC [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
2020-08-12 10:58:29.305 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2020-08-12 10:58:29.506 UTC [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.509 UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
2020-08-12 10:58:29.710 UTC [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.713 UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
2020-08-12 10:58:29.916 UTC [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.922 UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
2020-08-12 10:58:30.123 UTC [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.126 UTC [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
2020-08-12 10:58:30.327 UTC [cli.common] readBlock -> INFO 00c Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.331 UTC [channelCmd] InitCmdFactory -> INFO 00d Endorser and orderer connections initialized
2020-08-12 10:58:30.534 UTC [cli.common] readBlock -> INFO 00e Received block: 0
Error: error getting endorser client for channel: endorser client failed to connect to localhost:3051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:53668->127.0.0.1:3051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:4051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:4051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:5051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:57948->127.0.0.1:5051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:6051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:58976->127.0.0.1:6051: read: connection reset by peer"
2020-08-12 10:58:37.518 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.552 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
2020-08-12 10:58:37.685 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.763 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
Here, only the orderers are successfully added to the channel, but not the peers, even after changing the ports.
This isn't an issue; you can just specify it as you did for the others, like this. Are you facing any specific issues while mapping the ports?
ports:
  - 6984:5984 # Mapping Host Port to Container Port
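One point that is easy to miss here: the host:container mapping only affects clients on the host. On the compose network, other containers (the peers) still reach CouchDB on its container port, 5984. A sketch under that assumption, reusing the service and container names from the question:

couchdb0:
  container_name: couchdb0-test
  image: hyperledger/fabric-couchdb
  ports:
    - 1984:5984 # host port 1984 -> container port 5984 (CouchDB's fixed listen port)
peer0.org1.test.com:
  environment:
    # container-to-container traffic uses the container port, not the host mapping
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:5984

This would also explain the "dial tcp ...:1984: connect: connection refused" retries shown later in the question: CouchDB is not listening on 1984 inside its container.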
You can change the CouchDB port from the docker-compose file.
Here is a snippet from a docker-compose.yaml file:
couchdb0:
  container_name: couchdb0
  image: couchdb:2.3
  # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
  # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
  environment:
    - COUCHDB_USER=
    - COUCHDB_PASSWORD=
  # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
  # for example map it to utilize Fauxton User Interface in dev environments.
  ports:
    - "5984:5984"
  networks:
    - byfn
From here you can change ports easily.
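If the goal is two parallel environments (stable and development) on one machine, one hedged approach is to parameterize the published ports and the compose project name through an env file, so the same compose file can run twice without clashing. The variable names below are hypothetical:

# .env.dev (hypothetical names)
COMPOSE_PROJECT_NAME=fabric-dev
COUCHDB0_HOST_PORT=1984

# docker-compose.yaml
couchdb0:
  image: hyperledger/fabric-couchdb
  ports:
    - "${COUCHDB0_HOST_PORT:-5984}:5984"

With a recent docker-compose you can start each environment via docker-compose --env-file .env.dev up; each environment then gets its own host ports, and the in-network addresses stay the same. Note that fixed container_name values must also be removed or parameterized, since container names have to stay unique.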
I am trying to write data to a Kafka topic but am stuck with some errors. Below are my configuration and error details.
Kubernetes Service:
kubectl get services
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kafka-service   NodePort    10.105.214.246   <none>        9092:30998/TCP               17m
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP                      4d
zoo1            ClusterIP   10.101.3.128     <none>        2181/TCP,2888/TCP,3888/TCP   20m
Kubernetes Pods:
kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
kafka-broker0-69c97b67f-4pmw9             1/1     Running   1          1m
zookeeper-deployment-1-796f9d9bcc-cr756   1/1     Running   0          20m
Kafka Docker Process:
docker ps | grep kafka
f79cd0196083 wurstmeister/kafka@sha256:d04dafd2b308f26dbeed8454f67c321579c2818c1eff5e8f695e14a19b1d599b "start-kafka.sh" About a minute ago Up About a minute k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1
75393e9e25c1 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_0
Topic test is created successfully in Kafka as shown below:
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-topics.sh --list --zookeeper zoo1:2181
OR
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-topics.sh --list --zookeeper 10.101.3.128:2181
Output of the above command:
test
As the topic is available to write data to, I executed the command below with the host machine IP 10.225.36.98 or with the service IP 10.105.214.246:
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list 10.225.36.98:30998 --topic test ]
>{"k":"v"}
But none of them works for me; they throw the exception below:
[2019-01-01 09:26:52,215] ERROR Error when sending message to topic test with key: null, value: 9 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>[2019-01-01 09:27:59,513] WARN [Producer clientId=console-producer]
Connection to node -1 (/10.225.36.98:30998) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
When I tried to write to the broker with the hostname kafka:
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list kafka:9092 --topic test ]
[2019-01-01 09:34:41,293] WARN Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
(org.apache.kafka.clients.ClientUtils)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
As the host and service IPs were not working, I tried the pod IP, but got a test=LEADER_NOT_AVAILABLE error.
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list 172.17.0.7:9092 --topic test ]
>{"k":"v"}
[2019-01-01 09:52:30,733] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
After searching Google, I found a command to get the list of available brokers from Zookeeper. I tried to run it from the container and got stuck on the error below:
bash-4.4# ./opt/zookeeper/bin/zkCli.sh -server zoo1:2181 ls /brokers/ids
Connecting to zoo1:2181
Exception from Zookeeper:
2019-01-01 09:18:05,215 [myid:] - INFO [main:Environment#100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2019-01-01 09:18:05,219 [myid:] - INFO [main:Environment#100] - Client environment:host.name=zookeeper-deployment-1-796f9d9bcc-cr756
2019-01-01 09:18:05,220 [myid:] - INFO [main:Environment#100] - Client environment:java.version=1.8.0_151
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.vendor=Oracle Corporation
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/b
in/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-
1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-
1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.10.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.library.path=/usr/lib/jvm/java-1.8-
openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.io.tmpdir=/tmp
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:java.compiler=<NA>
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.name=Linux
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.arch=amd64
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.version=3.10.0-693.11.6.el7.x86_64
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.name=root
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.home=/root
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.dir=/
2019-01-01 09:18:05,225 [myid:] - INFO [main:ZooKeeper#438] - Initiating client connection, connectString=zoo1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher#25f38edc
2019-01-01 09:18:05,259 [myid:] - INFO [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1032] - Opening socket connection to server zoo1.default.svc.cluster.local/10.101.3.128:2181. Will not attempt to authenticate using SASL (unknown error)
2019-01-01 09:18:35,280 [myid:] - WARN [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1108] - Client session timed out, have not heard from server in 30027ms for sessionid 0x0
2019-01-01 09:18:35,282 [myid:] - INFO [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1156] - Client session timed out, have not heard from server in 30027ms for sessionid 0x0, closing socket connection and attempting reconnect
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /brokers/ids
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1532)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1560)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:731)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:599)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:362)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:290)
I also tried to create a Kafka service of type LoadBalancer, but no LoadBalancer IP is assigned to the service.
References I consulted to resolve this issue:
https://rmoff.net/2018/08/02/kafka-listeners-explained/
https://github.com/wurstmeister/kafka-docker/wiki/Connectivity#additional-listener-information
https://github.com/kubernetes/contrib/issues/2891
https://dzone.com/articles/ultimate-guide-to-installing-kafka-docker-on-kuber
https://github.com/wurstmeister/kafka-docker/issues/85
Any help would be appreciated.
Try the following command to send data to the topic:
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh --broker-list kafka-service:30998 --topic test
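For context, with a NodePort service the two numbers in 9092:30998/TCP serve different paths: inside the cluster, clients address the service name on its service port, while outside the cluster they address a node's IP on the NodePort. A sketch based on the kubectl get services output above:

# from a pod inside the cluster: service DNS name + service port
kafka-console-producer.sh --broker-list kafka-service:9092 --topic test
# from outside the cluster: a node's IP + the NodePort
kafka-console-producer.sh --broker-list <node-ip>:30998 --topic test

Either way, the broker's advertised listeners must point to an address the client can reach back, or the producer fails after the initial metadata request.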
I am trying to instantiate an installed chaincode using the peer chaincode instantiate command (as below). On executing the command, I receive the following error message:
Command to instantiate chaincode:
peer chaincode instantiate -o orderer.proofofownership.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp/tlscacerts/tlsca.proofofownership.com-cert.pem -C dmanddis -n CreateDiamond -v 1.0 -c '{"Args":[]}' -P "OR ('DiamondManufacturerMSP.peer','DistributorMSP.peer')"
Error Message received:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:1a96ecc8763e214ee543ecefe214df6025f8e98f2449f2b7877d04655ddadb49)
I tried rectifying this issue by adding the following attributes to the peer-base.yaml file:
- CORE_CHAINCODE_EXECUTETIMEOUT=300s
- CORE_CHAINCODE_DEPLOYTIMEOUT=300s
However, I am still receiving this particular error.
Following are my docker container configurations:
peer-base.yaml File:
services:
  peer-base:
    image: hyperledger/fabric-peer:x86_64-1.1.0
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
      #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
      #- CORE_LOGGING_LEVEL=INFO
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_CHAINCODE_EXECUTETIMEOUT=300s
      - CORE_CHAINCODE_DEPLOYTIMEOUT=300s
      #- CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
cli container configuration in the docker-compose-cli.yaml file:
cli:
  container_name: cli
  image: hyperledger/fabric-tools:x86_64-1.1.0
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    #- CORE_LOGGING_LEVEL=INFO
    - CORE_PEER_ID=cli
    - CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
    - CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
    - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin@dm.proofofownership.com/msp
    - CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=host
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash
  volumes:
    - /var/run/:/host/var/run/
    #- ./../chaincode/:/opt/gopath/src/github.com/chaincode
    #- ./chaincode/CreateDiamond/go:/opt/gopath/src/github.com/chaincode/
    - ./chaincode/CreateDiamond:/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/
    - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
    - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
  depends_on:
    - orderer.proofofownership.com
    - peer0.dm.proofofownership.com
    - peer1.dm.proofofownership.com
    - peer0.dist.proofofownership.com
    - peer1.dist.proofofownership.com
  #network_mode: host
  networks:
    - pow
Peer configuration in the docker-compose-base.yaml file:
peer0.dm.proofofownership.com:
  container_name: peer0.dm.proofofownership.com
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer0.dm.proofofownership.com
    #- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin@dm.proofofownership.com/msp
    #- CORE_PEER_MSPCONFIGPATH=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/users/Admin@dm.proofofownership.com/msp
    - CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.dm.proofofownership.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dm.proofofownership.com:7051
    - CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
    - CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
    #- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
    #- CORE_PEER_TLS_ROOTCERT_FILE=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls:/etc/hyperledger/fabric/tls
    - peer0.dm.proofofownership.com:/var/hyperledger/production
  ports:
    - 7051:7051
    - 7053:7053
Orderer configuration in the docker-compose-base.yaml file:
orderer.proofofownership.com:
  container_name: orderer.proofofownership.com
  image: hyperledger/fabric-orderer:x86_64-1.1.0
  environment:
    # CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE Newly Added
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
    - ORDERER_GENERAL_LOGLEVEL=DEBUG
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    #- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
    - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
    # enabled TLS
    - ORDERER_GENERAL_TLS_ENABLED=true
    #- ORDERER_GENERAL_TLS_ENABLED=false
    - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
    - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
    - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    # New Addition
    - CONFIGTX_ORDERER_ORDERERTYPE=solo
    - CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
    - CONFIGTX_ORDERER_BATCHTIMEOUT=2s
    - CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
  #working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
  command: orderer
  volumes:
    - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/genesis.block
    - ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp:/var/hyperledger/orderer/msp
    - ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/tls/:/var/hyperledger/orderer/tls
    - orderer.proofofownership.com:/var/hyperledger/production/orderer
  ports:
    - 7050:7050
I also reviewed the peer's Docker container logs (using docker logs) and received the following logs:
Launch -> ERRO 3eb launchAndWaitForRegister failed: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
2018-08-01 12:59:08.739 UTC [endorser] simulateProposal -> ERRO 3ed [dmanddis][cc34a201] failed to invoke chaincode name:"lscc" , error: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
The following logs were received when installing the chaincode:
2018-08-03 09:44:55.822 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-08-03 09:44:55.822 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode disabled
2018-08-03 09:44:58.270 UTC [golang-platform] getCodeFromFS -> DEBU 006 getCodeFromFS github.com/hyperledger/fabric/peer/chaincode
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 007 Discarding GOROOT package bytes
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 008 Discarding GOROOT package encoding/json
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 009 Discarding GOROOT package fmt
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00a Discarding provided package github.com/hyperledger/fabric/core/chaincode/shim
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00b Discarding provided package github.com/hyperledger/fabric/protos/peer
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00c Discarding GOROOT package strconv
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00d skipping dir: /opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/go
2018-08-03 09:45:02.090 UTC [golang-platform] GetDeploymentPayload -> DEBU 00e done
2018-08-03 09:45:02.090 UTC [container] WriteFileToPackage -> DEBU 00f Writing file to tarball: src/github.com/hyperledger/fabric/peer/chaincode/CreateDiamond.go
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 010 Sign: plaintext: 0AE3070A5B08031A0B089EC890DB0510...EC7BFE1B0000FFFFEE433C37001C0000
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 011 Sign: digest: E5160DE95DB096379967D959FA71E692F098983F443378600943EA5D7265A82C
2018-08-03 09:45:02.230 UTC [chaincodeCmd] install -> DEBU 012 Installed remotely response:<status:200 payload:"OK" >
2018-08-03 09:45:02.230 UTC [main] main -> INFO 013 Exiting.....
In the peer configuration, you specified a different port for the chaincode endpoint than for the peer address (chaincode endpoint on port 7052, peer address on port 7051):
CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
But this port is not exposed. Please add this to your peer port configuration:
- 7052:7052
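In context, the ports section of peer0.dm.proofofownership.com in docker-compose-base.yaml would then look like this (a sketch; 7051 and 7053 are kept from the original file):

ports:
  - 7051:7051
  - 7052:7052 # expose the chaincode endpoint alongside the peer address
  - 7053:7053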
It is likely that your chaincode is failing on start-up, i.e. the chaincode process itself is failing. You might want to try the development-mode tutorial approach to debug your chaincode: by executing it from within the container, you can view the logs to see what might not be working for you.
The devmode tutorial is here. You will simply need to replace the tutorial's chaincode with your own.
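As a rough sketch of that devmode approach (flag and variable names as documented for Fabric 1.x; the paths and addresses here are placeholders to adapt to this project): start the peer with chaincode development mode enabled, then build and launch the chaincode binary by hand so its start-up errors appear on stdout:

# terminal 1: start the peer in chaincode development mode
peer node start --peer-chaincodedev=true

# terminal 2: build and run the chaincode manually to see why it fails
go build -o CreateDiamond
CORE_CHAINCODE_ID_NAME=CreateDiamond:1.0 CORE_PEER_ADDRESS=127.0.0.1:7052 ./CreateDiamond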
I'm trying to use Jetty's server push feature with HAProxy. I've set up Jetty 9.4.7 with PushCacheFilter and HAProxy in two Docker containers.
I think Jetty tries to push something, but no PUSH_PROMISE frames are delivered to the client (I've checked Chrome's net-internals tab).
I'm not sure whether this is an issue with Jetty (maybe with h2c).
Here's my HAProxy config (taken from Jetty's documentation):
global
    tune.ssl.default-dh-param 1024

defaults
    timeout connect 10000ms
    timeout client 60000ms
    timeout server 60000ms

frontend fe_http
    mode http
    bind *:80
    # Redirect to https
    redirect scheme https code 301

frontend fe_https
    mode tcp
    bind *:443 ssl no-sslv3 crt /usr/local/etc/domain.pem ciphers TLSv1.2 alpn h2,http/1.1
    default_backend be_http

backend be_http
    mode tcp
    server domain basexhttp:8984
And here's how Jetty starts:
[main] INFO org.eclipse.jetty.util.log - Logging initialized #377ms to org.eclipse.jetty.util.log.Slf4jLog
BaseX 9.0 beta 5cc42ae [HTTP Server]
[main] INFO org.eclipse.jetty.server.Server - jetty-9.4.7.v20170914
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
[main] INFO org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
[main] INFO org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
[main] INFO org.eclipse.jetty.server.session - Scavenging every 600000ms
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#7dc222ae{/,file:///opt/basex/webapp/,AVAILABLE}{/opt/basex/webapp}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#3439f68d{h2c,[h2c, http/1.1]}{0.0.0.0:8984}
[main] INFO org.eclipse.jetty.server.Server - Started #784ms
HTTP Server was started (port: 8984).
HTTP Stop Server was started (port: 8985).
And here's the simple docker-compose.yml:
version: '3.3'
services:
  basexhttp:
    container_name: pushcachefilter-basexhttp
    build: pushcachefilter-basexhttp/
    image: "pushcachefilter/basexhttp"
    volumes:
      - "${HOME}/data:/opt/basex/data"
      - "${HOME}/base/app-web/webapp:/opt/basex/webapp"
    networks:
      - web
  haproxy:
    container_name: haproxy_container
    build: ha-proxy/
    image: "my_haproxy"
    depends_on:
      - basexhttp
    ports:
      - 80:80
      - 443:443
    networks:
      - web
networks:
  web:
    driver: overlay
Please note that I have to configure Jetty using my own jetty.xml; the way shown in the documentation is not working for me.
Thanks in advance,
Bodo