I have some organizations with more than 2 peers. While editing docker-compose-base.yaml, I was not sure how to define CORE_PEER_GOSSIP_BOOTSTRAP. Below is what I did, but the log shows that the peer fails to connect to the gossip peers. What is the correct way to do this? Thank you in advance!
docker-compose-base.yaml
  peer0.caseManager.snts.com:
    container_name: peer0.caseManager.snts.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.caseManager.snts.com
      - CORE_PEER_ADDRESS=peer0.caseManager.snts.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=[peer1.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051]
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.caseManager.snts.com:7051
      - CORE_PEER_LOCALMSPID=CaseManagerMSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/caseManager.snts.com/peers/peer0.caseManager.snts.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/caseManager.snts.com/peers/peer0.caseManager.snts.com/tls:/etc/hyperledger/fabric/tls
      - peer0.caseManager.snts.com:/var/hyperledger/production
    ports:
      - 9051:7051
      - 9053:7053
log of "docker-compose -p docker-compose.yaml up"
peer0.caseManager.snts.com | 2018-11-15 16:21:18.420 UTC [gossip/discovery] func1 -> WARN 023 Could not connect to {peer2.caseManager.snts.com:7051] [] [] peer2.caseManager.snts.com:7051] <nil> <nil>} : context deadline exceeded
peer0.caseManager.snts.com | 2018-11-15 16:21:18.420 UTC [gossip/discovery] func1 -> WARN 024 Could not connect to {[peer1.caseManager.snts.com:7051 [] [] [peer1.caseManager.snts.com:7051 <nil> <nil>} : context deadline exceeded
From a peer's perspective, the bootstrap peer is another peer from the same organization, which it can reach out to during bootstrap to obtain the information needed to get communication going.
Your setup otherwise looks correct, and it's perfectly plausible that your peer0 started up earlier than peer1 and peer2 and was unable to find them during startup, but that's not out of the ordinary. Did you end up having any error? If not, this looks like normal operation.
That said, drop the square brackets: the value is a plain space-separated list of endpoints.
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051
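For clarity, here is a minimal sketch of how that environment block would look with the corrected value (service and domain names taken from the question):

  peer0.caseManager.snts.com:
    environment:
      - CORE_PEER_ID=peer0.caseManager.snts.com
      - CORE_PEER_ADDRESS=peer0.caseManager.snts.com:7051
      # plain space-separated list of host:port endpoints, no brackets
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.caseManager.snts.com:7051
      - CORE_PEER_LOCALMSPID=CaseManagerMSP

With the brackets in place, the peer treats the bracket characters as part of the endpoint strings, which is why they show up inside the addresses the gossip log fails to dial.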
I need help! (who would have thought, right? lol)
I have a job interview in a few days and it would mean the world to me to be well prepared for it and have some working examples.
I am trying to set up an ELK pipeline to stream data from Kafka, through Logstash, into Elasticsearch, and finally read it from Kibana. The usual.
I am making use of containers, but the duo Logstash - Elasticsearch is giving me an aneurysm.
Everything else works perfectly fine. I've checked the logs of Kafka and that is working just fine. Kibana is connected to Elasticsearch just fine as well. But Logstash and ES really don't want to match.
Here is the setup
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:8.6.0
    container_name: elasticsearch
    #restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      cluster.name: elf-kafka-cluster
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  kibana:
    image: kibana:8.6.0
    container_name: kibana
    #restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: logstash:8.6.0
    container_name: logstash
    #restart: always
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    command: logstash -f /home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    environment:
      xpack.monitoring.enabled: true
      # LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    links:
      - elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
    driver: bridge
logstash.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["topic"]
  }
}
output {
  elasitcsearch {
    hosts => ["http://localhost:9200"]
    index => "topic"
    workers => 1
  }
}
These are logstash error logs when I compose up:
logstash | [2023-01-17T13:59:02,680][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash | [2023-01-17T13:59:04,711][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2023-01-17T13:59:05,373][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,379][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,436][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,444][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2023-01-17T13:59:05,449][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,477][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2023-01-17T13:59:05,567][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash | [2023-01-17T13:59:05,661][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf"}
logstash | [2023-01-17T13:59:05,664][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
logstash | [2023-01-17T13:59:06,333][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash | [2023-01-17T13:59:06,411][INFO ][logstash.runner ] Logstash shut down.
logstash | [2023-01-17T13:59:06,419][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
And this is to prove that everything is working as intended with ES (or so it seems):
netstat -an | grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp6 0 0 :::9200 :::* LISTEN
unix 3 [ ] STREAM CONNECTED 49200
I've looked through everything and this is 100% not a duplicate because I have tried it all. I really can't figure it out. Hope anyone can help.
Thank you for your time.
You should set up logstash.yml.
Create a logstash.yml with the values below. Note that the monitoring hosts entry should point at the elasticsearch service name: from inside the Logstash container, localhost refers to the Logstash container itself, not to Elasticsearch.
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
In your docker-compose.yml, add another volume to the Logstash container as shown below:
- ./logstash.yml:/usr/share/logstash/config/logstash.yml
Additionally, it's good to run the containers with a restart policy.
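Putting the pieces together, the Logstash service would look something like this (a sketch; it assumes logstash.yml sits next to the compose file, and it drops the command: override, since that path pointed at the host filesystem rather than inside the container — by default Logstash picks up pipelines from /usr/share/logstash/pipeline):

  logstash:
    image: logstash:8.6.0
    container_name: logstash
    restart: always
    volumes:
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash_pipeline/:/usr/share/logstash/pipeline:ro
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    networks:
      - elk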
SOLVED: and I really cannot explain what was wrong... it suddenly works with the same configuration. Maybe the connection was unstable, I really can't say. Just happy that my headaches over this issue are gone :)
So, I have this problem with deploying Kafka Connect on an external machine.
I get this error:
connect | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
connect | [main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
connect | [main] ERROR io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Brokers found [].
I have spent a lot of time figuring out the issue and did find some useful things, like advertised.listeners and how they work, so I have added INTERNAL and EXTERNAL listeners. I now know it is a connection problem somewhere, but I cannot figure out where.
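Roughly along these lines, following the usual dual-listener pattern for the confluentinc broker images (the hostnames and ports here are placeholders, not my exact config):

  environment:
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://something.b:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL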
I have tried using kafkacat on the external machine (tried both Windows and Ubuntu):
kafkacat -b something.b:9092 -L
and I do get a response with the list of brokers, topics, etc.
Some of the output:
Metadata for all topics (from broker 3: something.a:9092/3):
3 brokers:
broker 2 at something.b:9092 (controller)
broker 3 at something.c:9092
broker 1 at something.a:9092
and it of course gives the same output for all 3 brokers.
But when I try to spin up Kafka Connect, I get the above-mentioned error.
I am really out of ideas... here is the docker-compose code for my Connect service.
  connect:
    image: confluentinc/cp-kafka-connect-base:7.0.0
    hostname: connect
    container_name: connect
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'something.b:9092'
      CONNECT_REST_PORT: 8083
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_GROUP_ID: group
      CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.converters.ByteArrayConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.converters.ByteArrayConverter"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_PLUGIN_PATH: '/usr/share/java'
I am using Kafka version 2.4; maybe there's an issue there? Or what am I missing...
I've set up a ZooKeeper ensemble (version 3.4.9) with 3 instances. This works like a charm on the test system, but doesn't come up on the live system at all. The error message is the following:
2020-08-28 06:26:24,643 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 2 at election address /10.3.1.173:3888
java.net.NoRouteToHostException: Host is unreachable (Host unreachable)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
at java.lang.Thread.run(Thread.java:745)
I've searched on here and in other places, but the only accepted solution to the problem is to set each node's own server address to 0.0.0.0, which doesn't work here. My setup is fully dockerized and applied with Ansible, so it might look a bit different from what people normally do. But the connection string, e.g. for server.1, is this:
"server.1=0.0.0.0:2888:3888 server.2=10.3.1.173:2888:3888 server.3=10.3.1.175:2888:3888"
which is also applied to ZooKeeper's internal configuration, as the logs show (again for server.1):
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
2020-08-28 06:26:23,549 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
2020-08-28 06:26:23,559 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 10.3.1.175 to address: /10.3.1.175
2020-08-28 06:26:23,559 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 10.3.1.173 to address: /10.3.1.173
2020-08-28 06:26:23,560 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2020-08-28 06:26:23,560 [myid:] - INFO [main:QuorumPeerConfig@352] - Defaulting to majority quorums
(...)
2020-08-28 06:26:23,570 [myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
2020-08-28 06:26:23,577 [myid:1] - INFO [main:Login@294] - successfully logged in.
2020-08-28 06:26:23,579 [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
This is applied to all 3 instances of ZooKeeper, but none of them can talk to another.
Additional information:
Apart from IP-addresses for the servers, the configuration is identical to the test-system. The Ansible Docker module is configured the same, the JAAS-Config (with DigestLoginModule) is the same, and the environment variables inside of all docker containers are the same, too.
Each server inside the live system can ping the other servers. I can also ping these servers from inside each Zookeeper container. In addition, I can curl each Zookeeper container on the JMX-port from inside any other container of the live-system. So they definitely can connect over the network.
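(Worth noting: the stack trace fails on election port 3888, which neither ping nor the JMX curl exercises. A targeted check from inside a container would be something like:

  nc -zv 10.3.1.173 3888

so a firewall rule covering only ports 2888/3888 would not show up in the tests above.)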
Please help, thanks :D
Edit: @Stefano was asking how I start the docker containers, so I'll try to provide some insight. As mentioned, it's an Ansible setup: a task using the "docker_container" module, which is used in a playbook to install the 3 instances across machines:
---
- name: Install Zookeeper
  docker_container:
    name: zookeeper
    image: zookeeper:3.4.9
    state: started
    ports:
      - "2181:2181" # Zookeeper port
      - "2888:2888"
      - "3888:3888" # Election ports
      - "9998:8080" # JMX metrics
    env:
      ZOO_MY_ID: "{{ ID }}" # this is 1 for server.1, etc.
      ZOO_PORT: "2181"
      ZOO_SERVERS: "{{ ZOO_SERVERS }}" # provided in host-vars
      SERVER_JVMFLAGS: "-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf -javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.12.0.jar=8080:/opt/jmx-exporter/zookeeper.yml"
    volumes:
      - /home/ansible/volumes/zoo1/data:/data
      - /home/ansible/volumes/zoo1/datalog:/datalog
      - /home/ansible/jmx-exporter:/opt/jmx-exporter
      - /home/ansible/zookeeper_jaas.conf:/etc/kafka/zookeeper_jaas.conf
The ZOO_SERVERS are taken from the hosts file:
all:
  (...)
  children:
    zookeeper:
      hosts:
        zoo1:
          ID: "1"
          ZOO_SERVERS: "server.1=0.0.0.0:2888:3888 server.2=10.3.1.173:2888:3888 server.3=10.3.1.175:2888:3888"
          ansible_host: 10.3.1.171
        zoo2:
          ID: "2"
          ZOO_SERVERS: "server.1=10.3.1.171:2888:3888 server.2=0.0.0.0:2888:3888 server.3=10.3.1.175:2888:3888"
          ansible_host: 10.3.1.173
        zoo3:
          ID: "3"
          ZOO_SERVERS: "server.1=10.3.1.171:2888:3888 server.2=10.3.1.173:2888:3888 server.3=0.0.0.0:2888:3888"
          ansible_host: 10.3.1.175
So when I read back what I commented above, I noticed that I am not actually using the "confluentinc/cp-zookeeper" docker image, but the "zookeeper" docker image.
Once I changed from "zookeeper:3.4.9" to "confluentinc/cp-zookeeper:5.4.0" and adjusted the ZOO_PORT env-var's name to ZOOKEEPER_CLIENT_PORT, it somehow worked.
This doesn't answer the "why" but maybe this workaround helps someone else. I'll mark this as the accepted answer for now, but please feel free to provide additional insight.
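For completeness, a sketch of what the adjusted env block presumably looks like with the cp-zookeeper image, which expects ZOOKEEPER_-prefixed variables and a semicolon-separated server list (the exact values are an assumption based on the setup above):

    env:
      ZOOKEEPER_SERVER_ID: "{{ ID }}" # replaces ZOO_MY_ID (assumed)
      ZOOKEEPER_CLIENT_PORT: "2181" # the rename described above
      ZOOKEEPER_SERVERS: "0.0.0.0:2888:3888;10.3.1.173:2888:3888;10.3.1.175:2888:3888" # replaces ZOO_SERVERS (assumed)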
I have been working on a Hyperledger Fabric 2.0 multi-org network running on the default ports. The setup is as follows:
Org1 (Peer0: 7051, Peer1: 8051, CA: 7054, couchdb0: 5984, couchdb1: 6984:5984)
Org2 (Peer0: 9051, Peer1: 10051, CA: 8054, couchdb2: 7984:5984, couchdb3: 8984:5984)
Orderer (Orderer1: 7050, Orderer2: 8050, Orderer3: 9050), RAFT mechanism
The requirement is to redefine all the container ports mentioned above so that I can run the same Fabric application as two environments (one for testing (stable version) and one for development).
I tried to change the ports of the peers, orderers, and CAs by specifying environment variables for the ports in docker-compose, but I don't see any option for CouchDB, which always has the default port (5984).
Is there any way to achieve this, so that it is also possible to run two different Fabric applications on the same virtual machine?
EDIT1:
My docker-compose.yaml file (I have only included Org1 (Peer0, Peer1), Orderer1, ca-org1, couchdb0, and couchdb1):
version: "2"
networks:
test2:
services:
ca-org1:
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca.org1.test.com
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/priv_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-tls/tlsca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-tls/priv_sk
ports:
- "3054:3054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/ca/:/etc/hyperledger/fabric-ca-server-config
- ./channel/crypto-config/peerOrganizations/org1.test.com/tlsca/:/etc/hyperledger/fabric-ca-server-tls
container_name: ca.org1.test.com
hostname: ca.org1.test.com
networks:
- test2
orderer.test.com:
container_name: orderer.test.com
image: hyperledger/fabric-orderer:2.1
dns_search: .
environment:
- ORDERER_GENERAL_LOGLEVEL=info
- FABRIC_LOGGING_SPEC=INFO
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:3443
- ORDERER_GENERAL_LISTENPORT=3050
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 3050:3050
- 3443:3443
networks:
- test2
volumes:
- ./channel/genesis.block:/var/hyperledger/orderer/genesis.block
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/msp:/var/hyperledger/orderer/msp
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/tls:/var/hyperledger/orderer/tls
couchdb0:
container_name: couchdb0-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 1984:1984
networks:
- test2
couchdb1:
container_name: couchdb1-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 2984:1984
networks:
- test2
peer0.org1.test.com:
container_name: peer0.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer0.org1.test.com
- CORE_PEER_ADDRESS=peer0.org1.test.com:3051
- CORE_PEER_LISTENADDRESS=0.0.0.0:3051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.test.com:3052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:3052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.test.com:3051
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
depends_on:
- couchdb0
ports:
- 3051:3051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
peer1.org1.test.com:
container_name: peer1.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=debug
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer1.org1.test.com
- CORE_PEER_ADDRESS=peer1.org1.test.com:4051
- CORE_PEER_LISTENADDRESS=0.0.0.0:4051
- CORE_PEER_CHAINCODEADDRESS=peer1.org1.test.com:4052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:4052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.test.com:3051
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
ports:
- 4051:4051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
Thanks for the suggestions regarding CouchDB. I had thought that we should only specify the default CouchDB port for each instance. Anyway, I had missed the step of changing the container names in the first place (from the default peer0.org1.example.com to peer0.org1.test.com). With the new container names I was able to start the docker containers without stopping (recreating) the existing containers already running on the original ports.
The issue I am facing now is that the peer is not able to communicate with the couchdb-test URL:
U 04c Entering VerifyCouchConfig()
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04d Entering handleRequest() method=GET url=http://couchdb1-test:1984/ dbName=
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04e Request URL: http://couchdb1-test:1984/
2020-08-12 11:22:45.011 UTC [couchdb] handleRequest -> WARN 04f Retrying couchdb request in 125ms. Attempt:1 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.137 UTC [couchdb] handleRequest -> WARN 050 Retrying couchdb request in 250ms. Attempt:2 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.389 UTC [couchdb] handleRequest -> WARN 051 Retrying couchdb request in 500ms. Attempt:3 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
Hence, if I try to create a channel, the peer container exits even though it was running until then, and it's not able to join the channel:
2020-08-12 10:58:29.264 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:29.301 UTC [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
2020-08-12 10:58:29.305 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2020-08-12 10:58:29.506 UTC [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.509 UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
2020-08-12 10:58:29.710 UTC [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.713 UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
2020-08-12 10:58:29.916 UTC [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.922 UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
2020-08-12 10:58:30.123 UTC [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.126 UTC [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
2020-08-12 10:58:30.327 UTC [cli.common] readBlock -> INFO 00c Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.331 UTC [channelCmd] InitCmdFactory -> INFO 00d Endorser and orderer connections initialized
2020-08-12 10:58:30.534 UTC [cli.common] readBlock -> INFO 00e Received block: 0
Error: error getting endorser client for channel: endorser client failed to connect to localhost:3051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:53668->127.0.0.1:3051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:4051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:4051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:5051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:57948->127.0.0.1:5051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:6051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:58976->127.0.0.1:6051: read: connection reset by peer"
2020-08-12 10:58:37.518 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.552 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
2020-08-12 10:58:37.685 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.763 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
Here, only the orderers are successfully added to the channel, but not the peers, even after changing the ports.
This isn't an issue; you can just map it as you did for the others, like this. Are you facing some specific issue while mapping the ports?
    ports:
      - 6984:5984 # Mapping Host Port to Container Port
You can change the CouchDB port from the docker-compose file.
Here is a snippet from a docker-compose.yaml file:
  couchdb0:
    container_name: couchdb0
    image: couchdb:2.3
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - "5984:5984"
    networks:
      - byfn
From here you can change ports easily.
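One thing to watch: only the host side of the mapping should change. CouchDB itself still listens on 5984 inside the container, and peers talk to it over the Docker network using that internal port. So for the setup in the question, a sketch of the fix would be:

  couchdb0:
    container_name: couchdb0-test
    image: hyperledger/fabric-couchdb
    ports:
      - 1984:5984 # host port changes, container port stays 5984

together with the peer pointing at the internal port:

      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:5984

That also matches the "connection refused" retries above: the peer was dialing port 1984 inside the container, where nothing listens.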
I'm trying to set up a network of 2 organizations, each having two peers, and a 3rd organization having 2 orderer nodes, with a Kafka-ZooKeeper ensemble of 4 Kafka and 3 ZooKeeper nodes.
Below is the relevant part of my crypto-config.yaml file:
OrdererOrgs:
  - Name: Orderer
    Domain: ordererOrg.example.com
    Template:
      Count: 2
Below is the relevant part of my configtx.yaml file:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/ordererOrg.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
.................
Orderer: &OrdererDefaults
OrdererType: kafka
Addresses:
- orderer0.ordererOrg.example.com:7050
- orderer1.ordererOrg.example.com:7040
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- kafka0.ordererOrg.example.com:9092
- kafka1.ordererOrg.example.com:9092
- kafka2.ordererOrg.example.com:9092
- kafka3.ordererOrg.example.com:9092
...............
Below is the relevant part of my Docker base file:
zookeeper:
  image: hyperledger/fabric-zookeeper
  environment:
    - ZOO_SERVERS=server.1=zookeeper0.ordererOrg.example.com:2888:3888 server.2=zookeeper1.ordererOrg.example.com:2888:3888 server.3=zookeeper2.ordererOrg.example.com:2888:3888
  restart: always

kafka:
  image: hyperledger/fabric-kafka
  restart: always
  environment:
    - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.ordererOrg.example.com:2181,zookeeper1.ordererOrg.example.com:2181,zookeeper2.ordererOrg.example.com:2181
Below is the relevant part of my Docker Compose file:
  zookeeper0.ordererOrg.example.com:
    container_name: zookeeper0.ordererOrg.example.com
    extends:
      file: base/kafka-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=1
    ports:
      - '2181:2181'
      - '2888:2888'
      - '3888:3888'
    networks:
      - byfn

  kafka0.ordererOrg.example.com:
    container_name: kafka0.ordererOrg.example.com
    extends:
      file: base/kafka-base.yaml
      service: kafka
    depends_on:
      - zookeeper0.ordererOrg.example.com
      - zookeeper1.ordererOrg.example.com
      - zookeeper2.ordererOrg.example.com
    environment:
      - KAFKA_BROKER_ID=0
    ports:
      - '9092:9092'
      - '9093:9093'
    networks:
      - byfn
-----------------------
Note: The same structure is being followed for:
- zookeeper1.ordererOrg.example.com
- zookeeper2.ordererOrg.example.com
and
- kafka1.ordererOrg.example.com
- kafka2.ordererOrg.example.com
- kafka3.ordererOrg.example.com
When I run the network start command I get the following error messages:
✖ Starting business network definition. This may take a minute...
Error: Error trying to start business network. Error: No valid responses from any peers. Response from attempted peer comms was an error: Error: REQUEST_TIMEOUT
And when I run the same network start command again, I get the following:
✖ Starting business network definition. This may take a minute...
Error: Error trying to start business network. Error: No valid responses from any peers. Response from attempted peer comms was an error: Error: chaincode registration failed: timeout expired while starting chaincode tt_poc:0.0.1 for transaction
And image files are also not being created for the chaincode (BNA file), as can be seen from the ccenv containers and the orderer logs.
I also get the following logs on the console after the peer channel create command, though the channel gets created successfully:
2019-03-25 15:20:34.567 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2019-03-25 15:20:34.956 UTC [cli.common] readBlock -> INFO 002 Got status: &{SERVICE_UNAVAILABLE}
I have tried to provide as much information as possible, but please let me know if you require logs from any other container as well. Thanks for your time.
I was finally able to resolve this issue. There was nothing wrong with these YAML configurations. The issue was with the Docker configuration: it was lacking resources, and the strange thing is that I didn't get any resource-related error in any container log file. So I just increased the CPU and memory settings in Docker's advanced configuration.
After these configuration changes, my network started successfully and is working properly.
Thanks to my colleague @Rafiq who helped me sort out this issue.
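For anyone hitting something similar: a quick way to see how much CPU and memory the Docker engine actually has available is

  docker info | grep -E 'CPUs|Total Memory'

A Kafka/ZooKeeper-based Fabric network runs a dozen or so JVM-heavy containers, so the default Docker Desktop allocation is often too small even when no container logs an explicit out-of-memory error.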