Error starting hyperledger fabcar sample application - hyperledger

I am trying to install the Hyperledger Fabric sample application from http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html
I am getting an error similar to the one in the post mentioned here: hyperledger fabric fabcar error
2017-08-24 07:47:16.826 UTC [grpc] Printf -> DEBU 005 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.18.0.5:7051: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7051 <nil>}
Error: Error getting endorser client channel: PER:404 - Error trying to connect to local peer
Below are the logs from docker logs peer0.org1.example.com; apparently the peer is not able to connect to CouchDB:
2017-08-24 07:47:03.728 UTC [couchdb] handleRequest -> DEBU 011 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2017-08-24 07:47:04.073 UTC [couchdb] handleRequest -> WARN 012 Retrying couchdb request in 125ms. Attempt:1 Error:Get http://couchdb:5984/: dial tcp 109.234.109.83:5984: getsockopt: connection refused
2017-08-24 07:47:04.199 UTC [couchdb] handleRequest -> DEBU 013 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2017-08-24 07:47:04.385 UTC [couchdb] handleRequest -> WARN 014 Retrying couchdb request in 250ms. Attempt:2 Error:Get http://couchdb:5984/: dial tcp 109.234.109.77:5984: getsockopt: connection refused
I can see a listening socket on port 5984.
From inside the CouchDB container (docker exec -it couchdb bash):
couchdb@57c8996a4ba6:~$ netstat -ntulpa
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5984 0.0.0.0:* LISTEN 6/beam.smp
tcp 0 0 127.0.0.1:5986 0.0.0.0:* LISTEN 6/beam.smp
tcp 0 0 127.0.0.11:43471 0.0.0.0:* LISTEN -
udp 0 0 127.0.0.11:52081 0.0.0.0:* -
From the host shell, outside Docker:
# netstat -ntulpa | grep 5984
tcp6 0 0 :::5984 :::* LISTEN 12877/docker-proxy
Why is the peer not able to connect to CouchDB?

Based on the comments, I think your host system is configured with a DNS search domain that automatically resolves unknown hostnames (note that the retry errors show couchdb resolving to public addresses such as 109.234.109.83 rather than an address on the Docker bridge network). You may need to modify basic-network/docker-compose.yml and add dns_search: . as a config value for the peer:
peer0.org1.example.com:
  container_name: peer0.org1.example.com
  image: hyperledger/fabric-peer:x86_64-1.0.0
  dns_search: .
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer0.org1.example.com
    - CORE_LOGGING_PEER=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    # # the following setting starts chaincode containers on the same
    # # bridge network as the peers
    # # https://docs.docker.com/compose/networking/
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  # command: peer node start --peer-chaincodedev=true
  ports:
    - 7051:7051
    - 7053:7053
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/msp/peer
    - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    - ./config:/etc/hyperledger/configtx
  depends_on:
    - orderer.example.com
  networks:
    - basic
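A quick way to confirm the diagnosis is to check what the hostname couchdb resolves to from inside the peer container, before and after adding dns_search: . This is a minimal sketch, assuming the container names from the compose file above and that getent is available in the peer image:
# What does the peer resolve "couchdb" to? With a corporate DNS search domain
# this can return a public IP (e.g. 109.234.109.x) instead of the address on
# the Docker bridge network.
docker exec peer0.org1.example.com getent hosts couchdb
# Compare with the address Docker actually assigned to the CouchDB container.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' couchdb
Once the two addresses match, the peer's CouchDB retries should stop.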

Related

Service Not accessible using Container Host Name from within the Container

We have been facing an issue while trying to spin up a local containerized Confluent Kafka ecosystem using the docker-compose tool.
Host OS (Virtual Machine connecting through RDP)
$ sudo lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
$ uname -a
Linux IS*******1 4.15.0-189-generic #200-Ubuntu SMP Wed Jun 22 19:53:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Docker Engine
$ sudo docker version
Client: Docker Engine - Community
Version: 20.10.20
API version: 1.41
Go version: go1.18.7
Git commit: 9fdeb9c
Built: Tue Oct 18 18:20:19 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.20
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 03df974
Built: Tue Oct 18 18:18:11 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.16
GitCommit: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker-compose.yaml (Compose created network "kafka_default" using default bridge driver)
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:7.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.6.0-7.3.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.2.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java/ConnectPlugin"
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
      CONNECT_TOPIC_CREATION_ENABLE: 'false'
    volumes:
      - ./connect-plugins:/usr/share/java/ConnectPlugin
  control-center:
    image: confluentinc/cp-enterprise-control-center:7.3.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
Now, when executing $ sudo docker-compose up -d, we observe that only the "zookeeper" container comes up; the "broker" (i.e. Kafka) container exits with status 1 and the following error -
$ sudo docker container logs -t --details broker
2023-02-08T14:07:30.660123686Z ===> User
2023-02-08T14:07:30.662518281Z uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
2023-02-08T14:07:30.666677333Z ===> Configuring ...
2023-02-08T14:07:34.144645597Z ===> Running preflight checks ...
2023-02-08T14:07:34.147953062Z ===> Check if /var/lib/kafka/data is writable ...
2023-02-08T14:07:34.564183498Z ===> Check if Zookeeper is healthy ...
2023-02-08T14:07:35.861623527Z [2023-02-08 14:07:35,856] INFO Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861652177Z [2023-02-08 14:07:35,857] INFO Client environment:host.name=broker (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861683838Z [2023-02-08 14:07:35,857] INFO Client environment:java.version=11.0.16.1 (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861689083Z [2023-02-08 14:07:35,857] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861694000Z [2023-02-08 14:07:35,857] INFO Client environment:java.home=/usr/lib/jvm/zulu11-ca (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861703288Z [2023-02-08 14:07:35,857] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.3.0.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.14.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.13.2.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/snakeyaml-1.30.jar:/usr/share/java/cp-base-new/utility-belt-7.3.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.6.3.jar:/usr/share/java/cp-base-new/audience-annotations-0.5.0.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.6.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.3.0-ccs.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.3.0-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/common-utils-7.3.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.13.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.10.jar:/usr/share/java/cp-base-new/zookeeper-3.6.3.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.3.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-common-7.3.0-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jose4j-0.7.9.jar:/usr/share/java/cp-base-new/snappy-java-1.1.8.4.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.5.jar:/usr/share/java/cp-base-new/scala-library-2.13.5.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/jackson-core-2.13.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-clients-7.3.0-ccs.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.13.2.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.13.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.3.0-ccs.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.13.2.jar:/usr/share/java/cp-base-new/kafka_2.13-7.3.0-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.13.2.2.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.10.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/reload4j-1.2.19.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861725300Z [2023-02-08 14:07:35,857] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861730555Z [2023-02-08 14:07:35,857] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861734946Z [2023-02-08 14:07:35,857] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861740934Z [2023-02-08 14:07:35,857] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861745386Z [2023-02-08 14:07:35,857] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861749485Z [2023-02-08 14:07:35,857] INFO Client environment:os.version=4.15.0-189-generic (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861754326Z [2023-02-08 14:07:35,857] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862778083Z [2023-02-08 14:07:35,857] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862781895Z [2023-02-08 14:07:35,857] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862785144Z [2023-02-08 14:07:35,857] INFO Client environment:os.memory.free=242MB (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862788600Z [2023-02-08 14:07:35,857] INFO Client environment:os.memory.max=4006MB (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862792192Z [2023-02-08 14:07:35,857] INFO Client environment:os.memory.total=252MB (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.875374527Z [2023-02-08 14:07:35,869] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@3c0a50da (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.875395839Z [2023-02-08 14:07:35,873] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
2023-02-08T14:07:35.898794157Z [2023-02-08 14:07:35,894] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2023-02-08T14:07:35.910713716Z [2023-02-08 14:07:35,905] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.094475506Z [2023-02-08 14:07:36,087] INFO Opening socket connection to server zookeeper/172.19.0.2:2181. (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.094498351Z [2023-02-08 14:07:36,090] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.114680668Z [2023-02-08 14:07:36,111] ERROR Unable to open socket to zookeeper/172.19.0.2:2181 (org.apache.zookeeper.ClientCnxnSocketNIO)
2023-02-08T14:07:36.121566533Z [2023-02-08 14:07:36,112] WARN Session 0x0 for sever zookeeper/172.19.0.2:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.121606559Z java.net.ConnectException: Connection refused
2023-02-08T14:07:36.121610572Z at java.base/sun.nio.ch.Net.connect0(Native Method)
2023-02-08T14:07:36.121614024Z at java.base/sun.nio.ch.Net.connect(Net.java:483)
2023-02-08T14:07:36.121617448Z at java.base/sun.nio.ch.Net.connect(Net.java:472)
2023-02-08T14:07:36.121620610Z at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:692)
2023-02-08T14:07:36.121623850Z at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:260)
2023-02-08T14:07:36.121627034Z at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:270)
2023-02-08T14:07:36.121630306Z at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1177)
2023-02-08T14:07:36.121633562Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
2023-02-08T14:07:37.230453807Z [2023-02-08 14:07:37,224] INFO Opening socket connection to server zookeeper/172.19.0.2:2181. (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:37.230488726Z [2023-02-08 14:07:37,224] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:37.230492954Z [2023-02-08 14:07:37,224] ERROR Unable to open socket to zookeeper/172.19.0.2:2181 (org.apache.zookeeper.ClientCnxnSocketNIO)
2023-02-08T14:07:37.230496368Z [2023-02-08 14:07:37,225] WARN Session 0x0 for sever zookeeper/172.19.0.2:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
To debug the connectivity, I have done a couple of preflight checks, such as -
Created a netshoot container within the same "kafka_default" network using sudo docker run -it --net container:zookeeper nicolaka/netshoot
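(As a side note, --net container:zookeeper attaches the debug shell to the zookeeper container's own network namespace, so the checks below effectively run from inside zookeeper itself. To test resolution and connectivity from a separate container on the Compose-created bridge network, a sketch like the following could be used instead; kafka_default is the network name mentioned above:)
sudo docker run -it --rm --network kafka_default nicolaka/netshoot
# then, inside the netshoot shell:
nslookup zookeeper      # should resolve via Docker's embedded DNS (127.0.0.11)
nc -vz zookeeper 2181   # plain TCP connectivity check against the zookeeper service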
Some useful information -
zookeeper# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.19.0.2 zookeeper
zookeeper# cat /etc/resolv.conf
search corp.***.com
nameserver 127.0.0.11
options ndots:0
Tried nslookup, netcat, ping, nmap, and telnet from within the container; the results are as follows -
zookeeper# nslookup zookeeper
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: zookeeper
Address: 172.19.0.2
zookeeper# nc -v -l -p 2181
nc: Address in use
zookeeper# ping -c 2 zookeeper
PING zookeeper (172.19.0.2) 56(84) bytes of data.
64 bytes from zookeeper (172.19.0.2): icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from zookeeper (172.19.0.2): icmp_seq=2 ttl=64 time=0.049 ms
--- zookeeper ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.049/0.051/0.054/0.002 ms
zookeeper# nmap -p0- -v -A -T4 zookeeper
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-08 15:43 UTC
NSE: Loaded 155 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating SYN Stealth Scan at 15:43
Scanning zookeeper (172.19.0.2) [65536 ports]
Completed SYN Stealth Scan at 15:43, 2.96s elapsed (65536 total ports)
Initiating Service scan at 15:43
Initiating OS detection (try #1) against zookeeper (172.19.0.2)
Retrying OS detection (try #2) against zookeeper (172.19.0.2)
NSE: Script scanning 172.19.0.2.
Initiating NSE at 15:43
Completed NSE at 15:43, 0.01s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Nmap scan report for zookeeper (172.19.0.2)
Host is up (0.000060s latency).
Not shown: 65533 closed tcp ports (reset)
PORT STATE SERVICE VERSION
2181/tcp filtered eforward
8080/tcp filtered http-proxy
39671/tcp filtered unknown
Too many fingerprints match this host to give specific OS details
Network Distance: 0 hops
NSE: Script Post-scanning.
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Read data files from: /usr/bin/../share/nmap
OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 5.55 seconds
Raw packets sent: 65551 (2.885MB) | Rcvd: 131094 (5.508MB)
zookeeper# telnet localhost 2181
Connected to localhost
zookeeper# telnet zookeeper 2181
telnet: can't connect to remote host (172.19.0.2): Connection refused
BUT, when tried from the host machine, it succeeded -
ghosh.sayak@IS********1:~/Kafka$ telnet localhost 2181
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
telnet> q
Connection closed.
Also, here is the current status of the containers -
ghosh.sayak@IS********1:~/Kafka$ sudo docker-compose ps
[sudo] password for ghosh.sayak:
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------
broker /etc/confluent/docker/run Exit 1
connect /etc/confluent/docker/run Exit 1
control-center /etc/confluent/docker/run Exit 1
schema-registry /etc/confluent/docker/run Exit 1
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp,:::2181->2181/tcp, 2888/tcp, 3888/tcp
NOTE: Docker Engine has been freshly installed by following this official documentation.
Stuck with this issue! Any help will be much appreciated!
Thanks in advance!

Issues while setting up Hyperledger Fabric 2.0 on different container ports (Testing & Development)

I have been working on a Hyperledger Fabric 2.0 multi-org network running on the default ports. The setup is as follows:
Org1 (Peer0: 7051, Peer1: 8051, CA: 7054, couchdb0: 5984, couchdb1: 6984:5984)
Org2 (Peer0: 9051, Peer1: 10051, CA: 8054, couchdb2: 7984:5984, couchdb3: 8984:5984)
Orderer (Orderer1: 7050, Orderer2: 8050, Orderer3: 9050), RAFT mechanism
The requirement is to redefine all the container ports mentioned above so that I can run the same Fabric application in two environments (one for testing (stable version) and one for development).
I tried to change the ports of the peers, orderers, and CA (by specifying port environment variables in docker-compose), but I don't have any option for CouchDB, which always has the default port (5984).
Is there any way to achieve this, so that it is also possible to run two different Fabric applications on the same virtual machine?
EDIT 1:
My docker-compose.yaml file (I have only included Org1 (peer0, peer1), orderer1, ca-org1, couchdb0, and couchdb1):
version: "2"
networks:
test2:
services:
ca-org1:
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca.org1.test.com
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/priv_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-tls/tlsca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-tls/priv_sk
ports:
- "3054:3054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/ca/:/etc/hyperledger/fabric-ca-server-config
- ./channel/crypto-config/peerOrganizations/org1.test.com/tlsca/:/etc/hyperledger/fabric-ca-server-tls
container_name: ca.org1.test.com
hostname: ca.org1.test.com
networks:
- test2
orderer.test.com:
container_name: orderer.test.com
image: hyperledger/fabric-orderer:2.1
dns_search: .
environment:
- ORDERER_GENERAL_LOGLEVEL=info
- FABRIC_LOGGING_SPEC=INFO
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:3443
- ORDERER_GENERAL_LISTENPORT=3050
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 3050:3050
- 3443:3443
networks:
- test2
volumes:
- ./channel/genesis.block:/var/hyperledger/orderer/genesis.block
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/msp:/var/hyperledger/orderer/msp
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/tls:/var/hyperledger/orderer/tls
couchdb0:
container_name: couchdb0-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 1984:1984
networks:
- test2
couchdb1:
container_name: couchdb1-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 2984:1984
networks:
- test2
peer0.org1.test.com:
container_name: peer0.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer0.org1.test.com
- CORE_PEER_ADDRESS=peer0.org1.test.com:3051
- CORE_PEER_LISTENADDRESS=0.0.0.0:3051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.test.com:3052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:3052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.test.com:3051
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
depends_on:
- couchdb0
ports:
- 3051:3051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
peer1.org1.test.com:
container_name: peer1.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=debug
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer1.org1.test.com
- CORE_PEER_ADDRESS=peer1.org1.test.com:4051
- CORE_PEER_LISTENADDRESS=0.0.0.0:4051
- CORE_PEER_CHAINCODEADDRESS=peer1.org1.test.com:4052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:4052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.test.com:3051
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
ports:
- 4051:4051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
Thanks for the suggestions regarding CouchDB. I had thought that we should only specify the default CouchDB port for each instance. In any case, I had missed the step of changing the container names in the first place (from the default peer0.org1.example.com to peer0.org1.test.com). I was able to start the Docker containers with the new container names so that they don't stop (recreate) the existing containers that are already running on the original ports.
The issue I am facing now is that the peer is not able to communicate with the couchdb-test URL:
U 04c Entering VerifyCouchConfig()
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04d Entering handleRequest() method=GET url=http://couchdb1-test:1984/ dbName=
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04e Request URL: http://couchdb1-test:1984/
2020-08-12 11:22:45.011 UTC [couchdb] handleRequest -> WARN 04f Retrying couchdb request in 125ms. Attempt:1 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.137 UTC [couchdb] handleRequest -> WARN 050 Retrying couchdb request in 250ms. Attempt:2 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.389 UTC [couchdb] handleRequest -> WARN 051 Retrying couchdb request in 500ms. Attempt:3 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
Hence, if I try to create a channel, the peer container exits (even though it was running until then) and it is not able to join the channel:
2020-08-12 10:58:29.264 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:29.301 UTC [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
2020-08-12 10:58:29.305 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2020-08-12 10:58:29.506 UTC [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.509 UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
2020-08-12 10:58:29.710 UTC [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.713 UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
2020-08-12 10:58:29.916 UTC [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.922 UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
2020-08-12 10:58:30.123 UTC [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.126 UTC [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
2020-08-12 10:58:30.327 UTC [cli.common] readBlock -> INFO 00c Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.331 UTC [channelCmd] InitCmdFactory -> INFO 00d Endorser and orderer connections initialized
2020-08-12 10:58:30.534 UTC [cli.common] readBlock -> INFO 00e Received block: 0
Error: error getting endorser client for channel: endorser client failed to connect to localhost:3051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:53668->127.0.0.1:3051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:4051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:4051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:5051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:57948->127.0.0.1:5051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:6051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:58976->127.0.0.1:6051: read: connection reset by peer"
2020-08-12 10:58:37.518 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.552 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
2020-08-12 10:58:37.685 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.763 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
Here, only the orderers are successfully added to the channel, but not the peers, even after changing the ports.
This isn't an issue; you can just specify it as you did for the others, like this. Are you facing some specific issue while mapping the ports?
ports:
  - 6984:5984   # Mapping Host Port to Container Port
You can change the CouchDB port from the docker-compose file. Here is a snippet from a docker-compose.yaml file:
couchdb0:
  container_name: couchdb0
  image: couchdb:2.3
  # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
  # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
  environment:
    - COUCHDB_USER=
    - COUCHDB_PASSWORD=
  # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
  # for example map it to utilize Fauxton User Interface in dev environments.
  ports:
    - "5984:5984"
  networks:
    - byfn
From here you can change ports easily.
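One detail worth spelling out for the two-environment setup: inside the Docker network, CouchDB keeps listening on its container port 5984 regardless of the host-side mapping, so only the left-hand (host) side of the mapping needs to differ per environment, and the peer should keep addressing the container port. A minimal sketch, reusing the service and container names from EDIT 1:
couchdb0:
  container_name: couchdb0-test
  image: hyperledger/fabric-couchdb
  ports:
    - 1984:5984   # host port 1984 for the test environment; container port stays 5984
peer0.org1.test.com:
  environment:
    # the peer talks to CouchDB over the Docker network, so it uses the container port
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:5984
The "dial tcp 172.27.0.11:1984: connect: connection refused" retries in the follow-up logs are consistent with this: nothing listens on port 1984 inside the CouchDB container.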

dial tcp 172.28.0.4:5983: getsockopt: connection refused

I am using fabric-dev-servers and docker-compose to create a network and install a business network card, but when I run ./startFabric.sh I get the following error at the end of starting the network.
2020-05-18 07:29:59.593 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2020-05-18 07:29:59.595 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.28.0.5:7051: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7051 <nil>}
2020-05-18 07:30:00.596 UTC [grpc] Printf -> DEBU 004 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.28.0.5:7051: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7051 <nil>}
2020-05-18 07:30:02.302 UTC [grpc] Printf -> DEBU 005 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.28.0.5:7051: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7051 <nil>}
Error: Error getting endorser client channel: endorser client failed to connect to peer0.org1.example.com:7051: failed to create new connection: context deadline exceeded
When I run docker logs peer0.org1.example.com, I get the following log:
2020-05-18 07:29:15.885 UTC [couchdb] handleRequest -> DEBU 023 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5983 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2020-05-18 07:29:15.886 UTC [couchdb] handleRequest -> WARN 024 Retrying couchdb request in 16s. Attempt:8 Error:Get http://couchdb:5983/: dial tcp 172.28.0.4:5983: getsockopt: connection refused
2020-05-18 07:29:31.887 UTC [couchdb] handleRequest -> DEBU 025 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5983 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2020-05-18 07:29:31.888 UTC [couchdb] handleRequest -> WARN 026 Retrying couchdb request in 32s. Attempt:9 Error:Get http://couchdb:5983/: dial tcp 172.28.0.4:5983: getsockopt: connection refused
2020-05-18 07:30:03.889 UTC [couchdb] handleRequest -> DEBU 027 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5983 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2020-05-18 07:30:03.890 UTC [couchdb] handleRequest -> WARN 028 Retrying couchdb request in 1m4s. Attempt:10 Error:Get http://couchdb:5983/: dial tcp 172.28.0.4:5983: getsockopt: connection refused
2020-05-18 07:31:07.891 UTC [couchdb] handleRequest -> DEBU 029 HTTP Request: GET / HTTP/1.1 | Host: couchdb:5983 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2020-05-18 07:31:07.893 UTC [couchdb] handleRequest -> WARN 02a Retrying couchdb request in 2m8s. Attempt:11 Error:Get http://couchdb:5983/: dial tcp 172.28.0.4:5983: getsockopt: connection refused
This is the docker-compose.yml of hlfv11:
version: '2'
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca:$ARCH-1.1.0
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.org1.example.com
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/19ab65abbb04807dad12e4c0a9aaa6649e70868e3abd0217a322d89e47e1a6ae_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.org1.example.com
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer:$ARCH-1.1.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/composer-genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./:/etc/hyperledger/configtx
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/etc/hyperledger/msp/orderer/msp
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:$ARCH-1.1.0
    dns_search: .
    environment:
      - CORE_LOGGING_LEVEL=debug
      - GODEBUG=netdns=go
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_PEER=debug # addition
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
      # - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hlfv11_basic
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5983
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./:/etc/hyperledger/configtx
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/peer/msp
      - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    depends_on:
      - orderer.example.com
      - couchdb
    #networks:
    #  - basic
  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb:$ARCH-0.4.6
    ports:
      - 5983:5983
    environment:
      DB_URL: http://localhost:5983/member_db
I have tried all the possible solutions, but it is still not working. Please help.
You have not defined the CouchDB username and password in the peer service:
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
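For placement, those two variables belong in the environment block of the peer0.org1.example.com service, next to the CouchDB settings that are already there; a minimal sketch based on the compose file above:
peer0.org1.example.com:
  environment:
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5983
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
It may also be worth double-checking the 5983 mapping, since the CouchDB images elsewhere in this thread listen on 5984 by default.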

Docker/zookeeper Will not attempt to authenticate using SASL

Good day,
I wanted to test the config store, which is built using Spring Boot. The instruction given to me was to run the project using the docker-compose.yml files. I'm new to this; I've tried to execute it, but while running those commands in the iMac terminal I'm facing the following exception.
platform-config-store | 2018-03-05 11:55:12.167 INFO 1 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@22bbbe6
platform-config-store | 2018-03-05 11:55:12.286 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:12.314 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
platform-config-store | java.net.ConnectException: Connection refused
platform-config-store | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_144]
platform-config-store | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_144]
platform-config-store | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store |
platform-config-store | 2018-03-05 11:55:13.422 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:13.424 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I've googled this problem, and some posts mentioned that the ZooKeeper server is not available and that this is why the error occurs. So I configured a local ZooKeeper instance on my machine and changed the docker-compose.yml file to use it instead of pulling the image from the registry. It didn't work, and I faced the same issue.
Some also posted that this is related to the firewall. I've verified that the firewall is turned off.
Following is the docker-compose file I'm executing.
docker-compose.yml
version: "3.0"
services:
zookeeper:
container_name: zookeeper
image: docker.*****.net/zookeeper
#image: zookeeper // tired to connect with local zookeeper instance
ports:
- 2181:2181
postgres:
container_name: postgres
image: postgres
ports:
- 5432:5432
environment:
- POSTGRES_PASSWORD=p3rmission
redis:
container_name: redis
image: redis
ports:
- 6379:6379
Could anyone please guide me on what I'm missing here? Help will be appreciated. Thanks.
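One thing that stands out in the log is connectString=localhost:2181: if platform-config-store runs as a container itself, localhost refers to that container, not to the zookeeper container, so the connection is refused. A minimal sketch of how the config store could be added to the same compose file and pointed at the service name instead; the service name, image, and environment variable here are assumptions, since the actual property depends on how the application reads its ZooKeeper address (for example spring.cloud.zookeeper.connect-string if Spring Cloud Zookeeper is used):
platform-config-store:
  container_name: platform-config-store
  image: docker.*****.net/platform-config-store   # hypothetical image name
  depends_on:
    - zookeeper
  environment:
    # hypothetical variable; the app must be configured to use the service name
    - ZOOKEEPER_CONNECT=zookeeper:2181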

Clair startup error grpc: addrConn.resetTransport failed to create client transport: connection error

I am trying to run the Clair Docker image quay.io/coreos/clair-git:latest using docker-compose. When I start the container, it starts throwing the messages below, and I get no response for the namespace query curl -v http://localhost:6060/v1/namespaces, which returns a 404.
clair_1 | {"Event":"pagination key is empty, generating...","Level":"warning","Location":"config.go:110","Time":"2018-02-08 20:46:49.733074"}
clair_1 | {"Detectors":"apt-sources,lsb-release,os-release,redhat-release,alpine-release","Event":"Clair registered components","Level":"info","Listers":"apk,dpkg,rpm","Location":"main.go:103","Time":"2018-02-08 20:46:49.733721","Updaters":"alpine,debian,oracle,rhel,ubuntu"}
clair_1 | {"Event":"running database migrations","Level":"info","Location":"pgsql.go:270","Time":"2018-02-08 20:46:49.739997"}
clair_1 | {"Event":"database migration ran successfully","Level":"info","Location":"pgsql.go:277","Time":"2018-02-08 20:46:49.744277"}
clair_1 | {"Event":"starting grpc server","Level":"info","Location":"server.go:155","Time":"2018-02-08 20:46:49.744700","addr":"[::]:6060"}
clair_1 | {"Event":"grpc server is configured without client certificate authentication","Level":"warning","Location":"server.go:199","Time":"2018-02-08 20:46:49.745422"}
clair_1 | {"Event":"notifier service is disabled","Level":"info","Location":"notifier.go:76","Time":"2018-02-08 20:46:49.745800"}
clair_1 | 2018/02/08 20:46:49 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp [::]:6060: connect: network is unreachable"; Reconnecting to {[::]:6060 <nil>}
clair_1 | {"Event":"starting health API","Level":"info","Location":"api.go:62","Time":"2018-02-08 20:46:49.746259","addr":"0.0.0.0:6061"}
clair_1 | {"Event":"updater service started","Level":"info","Location":"updater.go:91","Time":"2018-02-08 20:46:49.746437","lock identifier":"911feae4-9a65-4317-9676-8c65f4404e76"}
clair_1 | 2018/02/08 20:46:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp [::]:6060: connect: network is unreachable"; Reconnecting to {[::]:6060 <nil>}
clair_1 | 2018/02/08 20:46:52 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp [::]:6060: connect: network is unreachable"; Reconnecting to {[::]:6060 <nil>}
clair_1 | 2018/02/08 20:46:55 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp [::]:6060: connect: network is unreachable"; Reconnecting to {[::]:6060 <nil>}
Here is my working docker-compose.yml
version: '2'
services:
  clair:
    container_name: clair_clair
    image: quay.io/coreos/clair:v2.0.1
    restart: unless-stopped
    ports:
      - "6060-6061:6060-6061"
    volumes:
      - /tmp:/tmp
      - ./clair_config:/config
    command: [-config, /config/config.yaml]
Try running the stable image instead: quay.io/coreos/clair:v2.0.1. I noticed the same errors until I changed the image.
