Errors using Spark Streaming with a Kafka Docker container - docker

I am using a Kafka docker-compose setup, with the docker-compose.yml below, installed in a VMware machine.
When I connect to it with pyspark.streaming.kafka.KafkaUtils, it raises some errors.
Please help me resolve this problem.
I used the configuration from https://rmoff.net/2018/08/02/kafka-listeners-explained/
docker-compose.yml file
version: '3.7'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:latest"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"

  # This has three listeners you can experiment with.
  # BOB for internal traffic on the Docker network
  # FRED for traffic from the Docker-host machine (`localhost`)
  # ALICE for traffic from outside, reaching the Docker host on 192.168.231.145
  kafka0:
    image: "confluentinc/cp-enterprise-kafka:latest"
    ports:
      - '9092:9092'
      - '29094:29094'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://kafka0:9092,LISTENER_ALICE://0.0.0.0:29094
      KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092,LISTENER_ALICE://192.168.231.145:29094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT,LISTENER_ALICE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_BOB
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100

  kafkacat:
    image: confluentinc/cp-kafkacat
    command: sleep infinity
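To see which address each listener hands back to clients, a quick metadata probe can help. This is a sketch, not part of the original post; it reuses the confluent_kafka package that the script below already imports, and the IP and ports come from the compose file above:

from confluent_kafka import Consumer

# FRED is published on host port 9092, ALICE on 29094 (see compose file above)
for bootstrap in ("192.168.231.145:9092", "192.168.231.145:29094"):
    consumer = Consumer({"bootstrap.servers": bootstrap,
                         "group.id": "listener-check"})
    try:
        md = consumer.list_topics(timeout=5.0)
        # Each broker entry is the host:port the client is told to use for all
        # later connections; FRED advertises localhost:9092, while ALICE
        # advertises 192.168.231.145:29094
        print(bootstrap, "->", [(b.host, b.port) for b in md.brokers.values()])
    except Exception as exc:
        print(bootstrap, "-> failed:", exc)
    finally:
        consumer.close()

The point of the probe: a bootstrap connection can succeed while every follow-up connection fails, because clients reconnect to whatever address the broker advertises, which is exactly the trap this listener setup is designed to illustrate.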
Python code I used to connect from the VMware host machine
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from confluent_kafka import Producer, Consumer
import socket
import json

if __name__ == "__main__":
    sc = SparkContext(appName="Processing_raw_data")
    ssc = StreamingContext(sc, 1)
    # Receiver-based stream: connects to ZooKeeper first, then to whichever
    # broker address ZooKeeper hands back
    in_stream = KafkaUtils.createStream(ssc, "192.168.231.145:2181", socket.gethostname(), {"testing": 1}, {"auto.offset.reset": "smallest"})
    in_stream.pprint()
    ssc.start()
    ssc.awaitTermination()
Errors
21/06/19 19:44:24 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/06/19 19:44:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
-------------------------------------------
Time: 2021-06-19 19:44:27
-------------------------------------------
21/06/19 19:44:27 WARN AppInfo$: Can't read Kafka version from MANIFEST.MF. Possible cause: java.lang.NullPointerException
[Stage 0:> (0 + 1) / 1]-------------------------------------------
Time: 2021-06-19 19:44:28
-------------------------------------------
21/06/19 19:44:28 WARN ClientUtils$: Fetching topic metadata with correlation id 0 for topics [Set(testing)] from broker [id:0,host:kafka0,port:29092] failed
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
21/06/19 19:44:28 WARN ConsumerFetcherManager$LeaderFinderThread: [dino-computer_dino-computer-1624106667579-fbe9ab2d-leader-finder-thread], Failed to find leader for Set([testing,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(testing)] from broker [ArrayBuffer(id:0,host:kafka0,port:29092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 3 more
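The stack trace shows where this goes wrong: the ZooKeeper-based createStream asks ZooKeeper for the broker and gets back its registered address kafka0:29092 (LISTENER_BOB), which only resolves inside the Docker network. One possible workaround, sketched here under the assumption of the Spark 2.x spark-streaming-kafka-0-8 integration: bootstrap a direct stream from the externally advertised ALICE listener instead, so every address the client sees is reachable from the VM host.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

if __name__ == "__main__":
    sc = SparkContext(appName="Processing_raw_data")
    ssc = StreamingContext(sc, 1)
    # The direct stream fetches metadata from the broker list itself; the
    # request arrives on ALICE, so the broker advertises 192.168.231.145:29094
    in_stream = KafkaUtils.createDirectStream(
        ssc, ["testing"], {"metadata.broker.list": "192.168.231.145:29094"})
    in_stream.pprint()
    ssc.start()
    ssc.awaitTermination()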

Related

Quarkus Kafka Streams App unable to use SASL PLAIN mechanism: Unexpected handshake request with client mechanism PLAIN, enabled mechanisms are []

I've been developing a Kafka stream processing application with the Quarkus framework in Java. Now I'm trying to connect to the Kafka brokers via the SASL/PLAIN mechanism, but am getting the following error:
2022-10-27 10:52:06,736 ERROR [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | alarms-preprocessor-dev-a8147d3e-809c-4e96-9ce0-de10e55a8d72-admin) [AdminClient clientId=alarms-preprocessor-dev-a8147d3e-809c-4e96-9ce0-de10e55a8d72-admin] Connection to node -1 (localhost/127.0.0.1:29092) failed authentication due to: Unexpected handshake request with client mechanism PLAIN, enabled mechanisms are []
Apparently, the brokers do not have the PLAIN mechanism enabled, which raises the question of why my Kafka Connect service is able to use the PLAIN mechanism.
Anyway, this is my broker configuration (approximately the same for all 3 instances), using the confluentinc/cp-kafka Docker image with docker-compose:
broker-1:
  image: confluentinc/cp-kafka:7.2.1
  hostname: broker-1
  container_name: broker-1
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
    - "9092:9092"
    - "9091:9091"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9092,SASL_PLAINTEXT://broker-1:9091,PLAINTEXT_HOST://localhost:29092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"
    KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
    KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
  volumes:
    - /home/larissa/Projekte/SRE/Kafka/local_dev_cluster/files/kafka_jaas.conf:/etc/kafka/kafka_jaas.conf
and this is part of the output from docker logs broker-1 | grep PLAIN:
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
advertised.listeners = PLAINTEXT://broker-1:9092,SASL_PLAINTEXT://broker-1:9091,PLAINTEXT_HOST://localhost:29092
inter.broker.listener.name = PLAINTEXT
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
listeners = PLAINTEXT://0.0.0.0:9092,SASL_PLAINTEXT://0.0.0.0:9091,PLAINTEXT_HOST://0.0.0.0:29092
sasl.enabled.mechanisms = [PLAIN]
security.inter.broker.protocol = PLAINTEXT
The part that says "sasl.enabled.mechanisms = [PLAIN]" suggests that the PLAIN mechanism is indeed enabled. So maybe it's a problem with my Quarkus application configuration, which looks like this:
quarkus.kafka-streams.application-id=alarms-preprocessor-dev
quarkus.kafka-streams.bootstrap-servers=localhost:29092,localhost:29192,localhost:29292
quarkus.kafka-streams.topics=test_topic
quarkus.kafka-streams.security.protocol=SASL_PLAINTEXT
quarkus.kafka-streams.sasl.mechanism=PLAIN
quarkus.kafka-streams.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin-secret" \
serviceName="alarms-preprocessor";
All JAAS configs, users, and passwords are correct, by the way, since they work with Kafka Connect just fine. If necessary, I can provide those too; just let me know.
Thanks in advance for any answers :)
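One thing worth checking, offered here as a hedged observation from the configs quoted above rather than a confirmed answer: SASL mechanisms are enabled per listener, and the only SASL listener is on port 9091, while quarkus.kafka-streams.bootstrap-servers points at localhost:29092, which the protocol map assigns to the plain PLAINTEXT_HOST listener. A small sketch of the port-to-listener mapping implied by the broker config (hypothetical illustration only):

# port -> (listener name, security protocol), read off KAFKA_ADVERTISED_LISTENERS
# and KAFKA_LISTENER_SECURITY_PROTOCOL_MAP in the compose file above
listeners = {
    9092:  ("PLAINTEXT",      "PLAINTEXT"),       # inter-broker listener
    9091:  ("SASL_PLAINTEXT", "SASL_PLAINTEXT"),  # the only listener with PLAIN enabled
    29092: ("PLAINTEXT_HOST", "PLAINTEXT"),       # what the Quarkus app connects to
}
for port, (name, proto) in sorted(listeners.items()):
    print(f"port {port}: listener {name}, protocol {proto}")

A SASL handshake against a PLAINTEXT listener would produce exactly the "enabled mechanisms are []" error above, so pointing the Quarkus app at a SASL_PLAINTEXT listener that advertises a host-resolvable address looks like the direction to investigate.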

Connection to Kafka broker in Docker container fails [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 1 year ago.
I'm trying to get a simple Debezium stack (with Docker Compose) running, but the connection to the Kafka broker fails.
Here is my simplified docker-compose.yml:
version: "3"
services:
zookeeper:
image: debezium/zookeeper:1.6
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
kafka:
image: debezium/kafka:1.6
links:
- zookeeper
ports:
- "9092:9092"
environment:
BROKER_ID: 1
KAFKA_LISTENERS: LISTENER_BOB://kafka:29092,LISTENER_FRED://localhost:9092
KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka:29092,LISTENER_FRED://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_BOB
ZOOKEEPER_CONNECT: zookeeper:2181
As you may notice, I added the listeners according to Kafka Listeners - Explained. But when I use kafkacat -b localhost:9092 -L to retrieve the broker's metadata, the following error appears.
%6|1627623916.693|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 2ms in state APIVERSION_QUERY)
%6|1627623916.961|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 3ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
% ERROR: Failed to acquire metadata: Local: Broker transport failure
Kafka is not even able to start when I configure it as Debezium suggests here.
2021-07-30 05:48:38,591 - WARN [Controller-1-to-broker-1-send-thread:NetworkClient#780] - [Controller id=1, targetBrokerId=1] Connection to node 1 (/localhost:9092) could not be established. Broker may not be available.
What am I doing wrong?
Thank you for any help in advance!
You've bound LISTENER_FRED to localhost, so inside the container it only listens on the loopback interface, and the port Docker publishes to the host has nothing behind it. Change the listeners to bind on all interfaces:
KAFKA_LISTENERS: LISTENER_BOB://kafka:29092,LISTENER_FRED://0.0.0.0:9092

rsyslog not connecting to elasticsearch in docker

I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog transform and send these messages to Elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup that it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitly
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
    constant(value="{")
    constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"") property(name="hostname")
    constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
    constant(value="\",\"program\":\"") property(name="programname")
    constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
    constant(value="\",") property(name="$!all-json" position.from="2")
    # closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not: the rsyslog service in the compose file above is not attached to logging-network. Second, after putting the containers on the same network, log in to the rsyslog container and check whether port 9200 is reachable.
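A quick reachability probe once the rsyslog service is attached to logging-network (a hypothetical check, assuming a Python interpreter is available in a container on that network; the hostname elasticsearch is the compose service name above):

import socket

# Resolves the compose service name and opens a TCP connection to port 9200;
# raises OSError if this container is not on the same network as Elasticsearch
socket.create_connection(("elasticsearch", 9200), timeout=5).close()
print("Elasticsearch is reachable on elasticsearch:9200")

If that succeeds but rsyslog still reports connection refused, note the comment in the rsyslog.conf above: omelasticsearch sends to localhost:9200 by default, so the action would also need its server parameter pointed at the elasticsearch service name instead.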

Prometheus (in Docker container) Cannot Scrape Target on Host

Prometheus is running inside a Docker container (Docker version 18.09.2, build 6247962; docker-compose.yml below), and the scrape target is on localhost:8000, which is created by a Python 3 script.
The error shown for the failed scrape target (localhost:9090/targets) is:
Get http://127.0.0.1:8000/metrics: dial tcp 127.0.0.1:8000: getsockopt: connection refused
Question: Why is Prometheus, running in the Docker container, unable to scrape the target that is running on the host computer (Mac OS X)? How can we get Prometheus running in a Docker container to scrape a target running on the host?
Failed attempt: I tried replacing, in docker-compose.yml,
networks:
  - back-tier
  - front-tier
with
network_mode: "host"
but then we are unable to access the Prometheus admin page at localhost:9090.
I was unable to find a solution in these similar questions:
Getting error "Get http://localhost:9443/metrics: dial tcp 127.0.0.1:9443: connect: connection refused"
docker-compose.yml
version: '3.3'

networks:
  front-tier:
  back-tier:

services:
  prometheus:
    image: prom/prometheus:v2.1.0
    volumes:
      - ./prometheus/prometheus:/etc/prometheus/
      - ./prometheus/prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090
    networks:
      - back-tier
    restart: always

  grafana:
    image: grafana/grafana
    user: "104"
    depends_on:
      - prometheus
    ports:
      - 3000:3000
    volumes:
      - ./grafana/grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    networks:
      - back-tier
      - front-tier
    restart: always
prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'my-project'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'rigs-portal'
    scrape_interval: 5s
    static_configs:
      - targets: ['127.0.0.1:8000']
Output at http://localhost:8000/metrics
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 65.0
python_gc_objects_collected_total{generation="1"} 281.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 37.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="7",patchlevel="3",version="3.7.3"} 1.0
# HELP request_processing_seconds Time spend processing request
# TYPE request_processing_seconds summary
request_processing_seconds_count 2545.0
request_processing_seconds_sum 1290.4869346540017
# TYPE request_processing_seconds_created gauge
request_processing_seconds_created 1.562364777766845e+09
# HELP my_inprorgress_requests CPU Load
# TYPE my_inprorgress_requests gauge
my_inprorgress_requests 65.0
Python 3 script
from prometheus_client import start_http_server, Summary, Gauge
import random
import time

# Create a metric to track time spent and requests made
REQUEST_TIME = Summary("request_processing_seconds", 'Time spend processing request')

# Time each call to process_request (the request_processing_seconds counts in
# the metrics output above show this Summary is being populated)
@REQUEST_TIME.time()
def process_request(t):
    time.sleep(t)

if __name__ == "__main__":
    start_http_server(8000)
    g = Gauge('my_inprorgress_requests', 'CPU Load')
    g.set(65)
    while True:
        process_request(random.random())
While not a very common use case, you can indeed connect from your container to your host.
From https://docs.docker.com/docker-for-mac/networking/
I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
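A quick way to confirm the special DNS name resolves from inside a container (a hypothetical check, assuming a Python interpreter is available in the container image):

import socket

# host.docker.internal resolves inside Docker Desktop containers,
# not on the macOS host itself
print(socket.gethostbyname("host.docker.internal"))

If it resolves, changing the rigs-portal target from 127.0.0.1:8000 to host.docker.internal:8000 in prometheus.yml should let the scrape reach the Python script on the host.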
For reference for people who might find this question through search, this is supported now as of Docker 20.10 and above. See the following link:
How to access host port from docker container
and:
https://github.com/docker/for-linux/issues/264#issuecomment-823528103
Below is an example of running Prometheus on Docker for macOS, which causes Prometheus to scrape a simple Spring Boot application running on localhost:8080:
Bash
docker run --rm --name prometheus -p 9090:9090 -v /Users/YourName/conf/prometheus.yml:/etc/prometheus/prometheus.yml -d prom/prometheus
/Users/YourName/conf/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:8080']
In this case it is the use of the special domain host.docker.internal instead of localhost that allows the host to be resolved from the container on macOS, since the config file is mapped into the Prometheus container.
Environment
Macbook Pro, Apple M1 Pro
Docker version 20.10.17, build 100c701
Prometheus 2.38

Hyperledger Fabric - Unable to Start Network with Four Kafka and Three ZooKeeper Nodes

I'm trying to set up a network of two organizations, each having two peers, and a third organization having two orderer nodes, backed by a Kafka-ZooKeeper ensemble with four Kafka and three ZooKeeper nodes.
Below is the relevant part of my crypto-config.yaml file:
OrdererOrgs:
  - Name: Orderer
    Domain: ordererOrg.example.com
    Template:
      Count: 2
Below is the relevant part of my configtx.yaml file:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/ordererOrg.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
.................
Orderer: &OrdererDefaults
OrdererType: kafka
Addresses:
- orderer0.ordererOrg.example.com:7050
- orderer1.ordererOrg.example.com:7040
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- kafka0.ordererOrg.example.com:9092
- kafka1.ordererOrg.example.com:9092
- kafka2.ordererOrg.example.com:9092
- kafka3.ordererOrg.example.com:9092
...............
Below is the relevant part of my Docker base file:
zookeeper:
  image: hyperledger/fabric-zookeeper
  environment:
    - ZOO_SERVERS=server.1=zookeeper0.ordererOrg.example.com:2888:3888 server.2=zookeeper1.ordererOrg.example.com:2888:3888 server.3=zookeeper2.ordererOrg.example.com:2888:3888
  restart: always

kafka:
  image: hyperledger/fabric-kafka
  restart: always
  environment:
    - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.ordererOrg.example.com:2181,zookeeper1.ordererOrg.example.com:2181,zookeeper2.ordererOrg.example.com:2181
Below is the relevant part of my Docker Compose file:
zookeeper0.ordererOrg.example.com:
  container_name: zookeeper0.ordererOrg.example.com
  extends:
    file: base/kafka-base.yaml
    service: zookeeper
  environment:
    - ZOO_MY_ID=1
  ports:
    - '2181:2181'
    - '2888:2888'
    - '3888:3888'
  networks:
    - byfn

kafka0.ordererOrg.example.com:
  container_name: kafka0.ordererOrg.example.com
  extends:
    file: base/kafka-base.yaml
    service: kafka
  depends_on:
    - zookeeper0.ordererOrg.example.com
    - zookeeper1.ordererOrg.example.com
    - zookeeper2.ordererOrg.example.com
  environment:
    - KAFKA_BROKER_ID=0
  ports:
    - '9092:9092'
    - '9093:9093'
  networks:
    - byfn
-----------------------
Note: The same structure is followed for:
- zookeeper1.ordererOrg.example.com
- zookeeper2.ordererOrg.example.com
And
- kafka1.ordererOrg.example.com
- kafka2.ordererOrg.example.com
- kafka3.ordererOrg.example.com
When I run the network start command, I get the following error messages:
✖ Starting business network definition. This may take a minute...
Error: Error trying to start business network. Error: No valid
responses from any peers. Response from attempted peer comms was an
error: Error: REQUEST_TIMEOUT
And when I run the same network start command again, I get the following:
✖ Starting business network definition. This may take a minute...
Error: Error trying to start business network. Error: No valid
responses from any peers. Response from attempted peer comms was an
error: Error: chaincode registration failed: timeout expired while
starting chaincode tt_poc:0.0.1 for transaction
And image files are also not being created for the chaincode (BNA file), as you can see from the ccenv containers and orderer logs in the image below:
I also get the following logs on the console after the peer channel create command, though the channel gets created successfully:
2019-03-25 15:20:34.567 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2019-03-25 15:20:34.956 UTC [cli.common] readBlock -> INFO 002 Got status: &{SERVICE_UNAVAILABLE}
I tried to provide as much information as possible, but please let me know if you need logs from any other container as well. Thanks for your time.
I was finally able to resolve this issue. There was nothing wrong with these YAML configurations. The issue was with the Docker configuration: it was lacking resources, and the strange thing is that I didn't get any resource-related error in any container's log file. So I just increased the CPU and memory settings in Docker's advanced configuration, like below:
And after these configuration changes, my network started successfully and is working properly.
Thanks to my colleague @Rafiq, who helped me sort out this issue.
