I am trying to instantiate an installed chaincode using the peer chaincode instantiate command (shown below). On executing the command, I receive the following error message:
Command to instantiate chaincode:
peer chaincode instantiate -o orderer.proofofownership.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp/tlscacerts/tlsca.proofofownership.com-cert.pem -C dmanddis -n CreateDiamond -v 1.0 -c '{"Args":[]}' -P "OR ('DiamondManufacturerMSP.peer','DistributorMSP.peer')"
Error Message received:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:1a96ecc8763e214ee543ecefe214df6025f8e98f2449f2b7877d04655ddadb49)
I tried to rectify this issue by adding the following attributes to the "peer-base.yaml" file:
- CORE_CHAINCODE_EXECUTETIMEOUT=300s
- CORE_CHAINCODE_DEPLOYTIMEOUT=300s
However, I am still receiving the same error.
Following are my docker container configurations:
peer-base.yaml File:
services:
peer-base:
image: hyperledger/fabric-peer:x86_64-1.1.0
environment:
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
#- CORE_LOGGING_LEVEL=INFO
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_CHAINCODE_EXECUTETIMEOUT=300s
- CORE_CHAINCODE_DEPLOYTIMEOUT=300s
#- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
cli container configuration in the "docker-compose-cli.yaml" file:
cli:
container_name: cli
image: hyperledger/fabric-tools:x86_64-1.1.0
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
#- CORE_LOGGING_LEVEL=INFO
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
- CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin@dm.proofofownership.com/msp
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=host
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: /bin/bash
volumes:
- /var/run/:/host/var/run/
#- ./../chaincode/:/opt/gopath/src/github.com/chaincode
#- ./chaincode/CreateDiamond/go:/opt/gopath/src/github.com/chaincode/
- ./chaincode/CreateDiamond:/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- orderer.proofofownership.com
- peer0.dm.proofofownership.com
- peer1.dm.proofofownership.com
- peer0.dist.proofofownership.com
- peer1.dist.proofofownership.com
#network_mode: host
networks:
- pow
peer configuration in "docker-compose-base.yaml" file:
peer0.dm.proofofownership.com:
container_name: peer0.dm.proofofownership.com
extends:
file: peer-base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.dm.proofofownership.com
#- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin@dm.proofofownership.com/msp
#- CORE_PEER_MSPCONFIGPATH=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/users/Admin@dm.proofofownership.com/msp
- CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.dm.proofofownership.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dm.proofofownership.com:7051
- CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
#- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
#- CORE_PEER_TLS_ROOTCERT_FILE=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
volumes:
- /var/run/:/host/var/run/
- ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls:/etc/hyperledger/fabric/tls
- peer0.dm.proofofownership.com:/var/hyperledger/production
ports:
- 7051:7051
- 7053:7053
Orderer Configuration in "docker-compose-base.yaml" file:
orderer.proofofownership.com:
container_name: orderer.proofofownership.com
image: hyperledger/fabric-orderer:x86_64-1.1.0
environment:
# CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE Newly Added
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
#- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
#- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
#- ORDERER_GENERAL_TLS_ENABLED=false
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
# New Addition
- CONFIGTX_ORDERER_ORDERERTYPE=solo
- CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
- CONFIGTX_ORDERER_BATCHTIMEOUT=2s
- CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
#working_dir: /opt/gopath/src/github.com/hyperledger/fabric
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
command: orderer
volumes:
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/genesis.block
- ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/tls/:/var/hyperledger/orderer/tls
- orderer.proofofownership.com:/var/hyperledger/production/orderer
ports:
- 7050:7050
I also reviewed the peer's Docker container logs (using docker logs) and found the following:
Launch -> ERRO 3eb launchAndWaitForRegister failed: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
2018-08-01 12:59:08.739 UTC [endorser] simulateProposal -> ERRO 3ed [dmanddis][cc34a201] failed to invoke chaincode name:"lscc" , error: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
The following logs were received when installing the chaincode:
2018-08-03 09:44:55.822 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-08-03 09:44:55.822 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode disabled
2018-08-03 09:44:58.270 UTC [golang-platform] getCodeFromFS -> DEBU 006 getCodeFromFS github.com/hyperledger/fabric/peer/chaincode
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 007 Discarding GOROOT package bytes
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 008 Discarding GOROOT package encoding/json
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 009 Discarding GOROOT package fmt
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00a Discarding provided package github.com/hyperledger/fabric/core/chaincode/shim
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00b Discarding provided package github.com/hyperledger/fabric/protos/peer
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00c Discarding GOROOT package strconv
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00d skipping dir: /opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/go
2018-08-03 09:45:02.090 UTC [golang-platform] GetDeploymentPayload -> DEBU 00e done
2018-08-03 09:45:02.090 UTC [container] WriteFileToPackage -> DEBU 00f Writing file to tarball: src/github.com/hyperledger/fabric/peer/chaincode/CreateDiamond.go
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 010 Sign: plaintext: 0AE3070A5B08031A0B089EC890DB0510...EC7BFE1B0000FFFFEE433C37001C0000
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 011 Sign: digest: E5160DE95DB096379967D959FA71E692F098983F443378600943EA5D7265A82C
2018-08-03 09:45:02.230 UTC [chaincodeCmd] install -> DEBU 012 Installed remotely response:<status:200 payload:"OK" >
2018-08-03 09:45:02.230 UTC [main] main -> INFO 013 Exiting.....
In the peer configuration, you specified a different port for the chaincode endpoint than for the peer address (chaincode endpoint on port 7052, peer address on port 7051):
CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
But this port is not exposed. Please add the following to your peer's port configuration:
- 7052:7052
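Applied to the docker-compose-base.yaml shown above, the peer's ports section would then look roughly like this:
ports:
  - 7051:7051
  - 7052:7052   # expose the chaincode listen endpoint as well
  - 7053:7053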
It is likely that your chaincode is failing on start-up. You might want to try the development mode tutorial approach to debug it: by running the chaincode process yourself from within the container, you can view its logs and see what is not working for you.
The devmode tutorial is here. You will simply need to replace the tutorial's chaincode with your own.
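If you go that route, it also helps to look at the chaincode container the peer tried to launch; a rough set of commands (container names are generated by the peer and will differ on your machine):
# chaincode containers are typically named dev-<peer id>-<chaincode name>-<version>
docker ps -a | grep dev-peer
# view that container's logs to see why the chaincode process failed to start
docker logs <container id or name from the previous command>
# the chaincode image built by the peer can be listed as well
docker images | grep dev-peer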
Related
My docker-compose.yml file for fabric-orderer deployment:
fabric-orderer:
image: hyperledger/fabric-orderer
container_name: fabric-orderer1
environment:
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_BOOTSTRAPMETHOD=file
- ORDERER_GENERAL_BOOTSTRAPFILE=/mnt/b/Desktop/HFForMastersWork/genesis-blocks/genesis1.block
- ORDERER_GENERAL_LOCALMSPDIR=/mnt/b/Desktop/HFForMastersWork/organizations/ordererOrgs/orderer1/msp
- ORDERER_GENERAL_LOCALMSPID=orderer1MSP
command: orderer
ports:
- 7050:7050
volumes:
- ./genesis.block:/mnt/b/Desktop/HFForMastersWork/genesis-blocks/genesis1.block
I always receive the following error:
fabric-orderer1 | 2021-07-02 10:38:03.671 UTC [orderer.common.server] loadLocalMSP -> PANI 003 Failed to get local msp config: could not load a valid signer certificate from directory /mnt/b/Desktop/HFForMastersWork/organizations/ordererOrgs/orderer1/msp/signcerts: stat /mnt/b/Desktop/HFForMastersWork/organizations/ordererOrgs/orderer1/msp/signcerts: no such file or directory
fabric-orderer1 | panic: Failed to get local msp config: could not load a valid signer certificate from directory /mnt/b/Desktop/HFForMastersWork/organizations/ordererOrgs/orderer1/msp/signcerts: stat /mnt/b/Desktop/HFForMastersWork/organizations/ordererOrgs/orderer1/msp/signcerts: no such file or directory
But I have checked all the paths and they are correct. Please help me: where is my error?
I have been working on a Hyperledger Fabric 2.0 multi-org network running on the default ports. The setup is as follows:
Org1 (Peer0: 7051, Peer1: 8051, CA: 7054, couchdb0: 5984, couchdb1: 6984:5984)
Org2 (Peer0: 9051, Peer1: 10051, CA: 8054, couchdb2: 7984:5984, couchdb3: 8984:5984)
Orderer (Orderer1: 7050, Orderer2: 8050, Orderer3: 9050), Raft mechanism
The requirement is to redefine all the container ports mentioned above so that I can run the same Fabric application as two environments (one for testing, the stable version, and one for development).
I tried to change the ports of the peers, orderers, and CA by specifying environment variables for the ports in docker-compose, but I don't see any option for CouchDB, which always uses the default port (5984).
Is there any way to achieve this, so that two different Fabric applications can run in the same virtual machine?
EDIT1:
My docker-compose.yaml file (I have only mentioned for- Org1(Peer0,peer1), Orderer1,ca-org1, couchdb0,couchdb1)
version: "2"
networks:
test2:
services:
ca-org1:
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca.org1.test.com
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/priv_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-tls/tlsca.org1.test.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-tls/priv_sk
ports:
- "3054:3054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/ca/:/etc/hyperledger/fabric-ca-server-config
- ./channel/crypto-config/peerOrganizations/org1.test.com/tlsca/:/etc/hyperledger/fabric-ca-server-tls
container_name: ca.org1.test.com
hostname: ca.org1.test.com
networks:
- test2
orderer.test.com:
container_name: orderer.test.com
image: hyperledger/fabric-orderer:2.1
dns_search: .
environment:
- ORDERER_GENERAL_LOGLEVEL=info
- FABRIC_LOGGING_SPEC=INFO
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:3443
- ORDERER_GENERAL_LISTENPORT=3050
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 3050:3050
- 3443:3443
networks:
- test2
volumes:
- ./channel/genesis.block:/var/hyperledger/orderer/genesis.block
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/msp:/var/hyperledger/orderer/msp
- ./channel/crypto-config/ordererOrganizations/test.com/orderers/orderer.test.com/tls:/var/hyperledger/orderer/tls
couchdb0:
container_name: couchdb0-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 1984:1984
networks:
- test2
couchdb1:
container_name: couchdb1-test
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 2984:1984
networks:
- test2
peer0.org1.test.com:
container_name: peer0.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer0.org1.test.com
- CORE_PEER_ADDRESS=peer0.org1.test.com:3051
- CORE_PEER_LISTENADDRESS=0.0.0.0:3051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.test.com:3052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:3052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.test.com:3051
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
depends_on:
- couchdb0
ports:
- 3051:3051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer0.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
peer1.org1.test.com:
container_name: peer1.org1.test.com
extends:
file: base.yaml
service: peer-base
environment:
- FABRIC_LOGGING_SPEC=DEBUG
- ORDERER_GENERAL_LOGLEVEL=debug
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test2
- CORE_PEER_ID=peer1.org1.test.com
- CORE_PEER_ADDRESS=peer1.org1.test.com:4051
- CORE_PEER_LISTENADDRESS=0.0.0.0:4051
- CORE_PEER_CHAINCODEADDRESS=peer1.org1.test.com:4052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:4052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.test.com:4051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.test.com:3051
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1-test:1984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
- CORE_METRICS_PROVIDER=prometheus
# - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
ports:
- 4051:4051
volumes:
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/msp:/etc/hyperledger/crypto/peer/msp
- ./channel/crypto-config/peerOrganizations/org1.test.com/peers/peer1.org1.test.com/tls:/etc/hyperledger/crypto/peer/tls
- /var/run/:/host/var/run/
- ./channel/:/etc/hyperledger/channel/
networks:
- test2
Thanks for the suggestions regarding CouchDB. I had thought we could only use the default CouchDB port for each instance. In any case, I had missed the step of changing the container names in the first place (e.g. the default peer0.org1.example.com to peer0.org1.test.com). With new container names I was able to start the Docker containers without stopping or recreating the existing containers that are already running on the original ports.
The issue I am facing now is that the peer is not able to reach the couchdb-test URL:
U 04c Entering VerifyCouchConfig()
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04d Entering handleRequest() method=GET url=http://couchdb1-test:1984/ dbName=
2020-08-12 11:22:45.010 UTC [couchdb] handleRequest -> DEBU 04e Request URL: http://couchdb1-test:1984/
2020-08-12 11:22:45.011 UTC [couchdb] handleRequest -> WARN 04f Retrying couchdb request in 125ms. Attempt:1 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.137 UTC [couchdb] handleRequest -> WARN 050 Retrying couchdb request in 250ms. Attempt:2 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
2020-08-12 11:22:45.389 UTC [couchdb] handleRequest -> WARN 051 Retrying couchdb request in 500ms. Attempt:3 Error:Get "http://couchdb1-test:1984/": dial tcp 172.27.0.11:1984: connect: connection refused
Hence, if I try to create a channel, the peer container exits (even though it had been running until then) and it is not able to join the channel:
2020-08-12 10:58:29.264 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:29.301 UTC [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
2020-08-12 10:58:29.305 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2020-08-12 10:58:29.506 UTC [cli.common] readBlock -> INFO 004 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.509 UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
2020-08-12 10:58:29.710 UTC [cli.common] readBlock -> INFO 006 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.713 UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
2020-08-12 10:58:29.916 UTC [cli.common] readBlock -> INFO 008 Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:29.922 UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
2020-08-12 10:58:30.123 UTC [cli.common] readBlock -> INFO 00a Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.126 UTC [channelCmd] InitCmdFactory -> INFO 00b Endorser and orderer connections initialized
2020-08-12 10:58:30.327 UTC [cli.common] readBlock -> INFO 00c Expect block, but got status: &{SERVICE_UNAVAILABLE}
2020-08-12 10:58:30.331 UTC [channelCmd] InitCmdFactory -> INFO 00d Endorser and orderer connections initialized
2020-08-12 10:58:30.534 UTC [cli.common] readBlock -> INFO 00e Received block: 0
Error: error getting endorser client for channel: endorser client failed to connect to localhost:3051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:53668->127.0.0.1:3051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:4051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:4051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:5051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:57948->127.0.0.1:5051: read: connection reset by peer"
Error: error getting endorser client for channel: endorser client failed to connect to localhost:6051: failed to create new connection: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:58976->127.0.0.1:6051: read: connection reset by peer"
2020-08-12 10:58:37.518 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.552 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
2020-08-12 10:58:37.685 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-08-12 10:58:37.763 UTC [channelCmd] update -> INFO 002 Successfully submitted channel update
Here, only the orderers are successfully added to the channel; the peers are not, even after changing the ports.
This isn't an issue; you can just map the CouchDB port as you did for the others, like this. Are you facing any specific issues while mapping the ports?
ports:
- 6984:5984 # Mapping Host Port to Container Port
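For example, a second CouchDB instance can publish a different host port while still listening on 5984 inside its container; over the Docker network the peer keeps addressing it by container name on the container port. A sketch (names are illustrative):
couchdb0-test:
  container_name: couchdb0-test
  image: hyperledger/fabric-couchdb
  ports:
    - 6984:5984   # host port 6984 -> container port 5984
  networks:
    - test2

# and in the corresponding peer service:
environment:
  - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0-test:5984   # container name + container port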
You can change the CouchDB port from the docker-compose file.
Here is a snippet from a docker-compose.yaml file:
couchdb0:
container_name: couchdb0
image: couchdb:2.3
# Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
# for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
# Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
# for example map it to utilize Fauxton User Interface in dev environments.
ports:
- "5984:5984"
networks:
- byfn
From here you can change ports easily.
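Once the ports are remapped, a quick way to confirm the mapping took effect (using the 5984:5984 mapping from the snippet above; adjust the host port if you changed it):
docker ps --filter "name=couchdb0"   # the PORTS column should show 0.0.0.0:5984->5984/tcp
curl http://localhost:5984/          # should return CouchDB's welcome JSON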
I created a small HyperLedger Fabric Network where I have a single channel with a single organization and a few peers along with an ordering service.
After going through the normal steps of creating my cryptographic material, genesis block, and channel.tx file, I tried to create my channel in a cli container using the command:
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
After this, I received the following error:
Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
Part of the output of the cli container's log file:
2019-02-15 20:14:57.323 UTC [orderer/common/server] Start -> INFO 0ab Beginning to serve requests
2019-02-15 20:15:00.063 UTC [orderer/common/server] Deliver -> DEBU 0ac Starting new Deliver handler
2019-02-15 20:15:00.064 UTC [common/deliver] Handle -> DEBU 0ad Starting new deliver loop for 192.168.176.6:38938
2019-02-15 20:15:00.064 UTC [common/deliver] Handle -> DEBU 0ae Attempting to read seek info message from 192.168.176.6:38938
2019-02-15 20:15:00.068 UTC [orderer/common/server] Broadcast -> DEBU 0af Starting new Broadcast handler
2019-02-15 20:15:00.068 UTC [orderer/common/broadcast] Handle -> DEBU 0b0 Starting new broadcast loop for 192.168.176.6:38940
2019-02-15 20:15:00.068 UTC [orderer/common/broadcast] Handle -> DEBU 0b1 [channel: mychannel] Broadcast is processing config update message from 192.168.176.6:38940
2019-02-15 20:15:00.068 UTC [orderer/common/msgprocessor] ProcessConfigUpdateMsg -> DEBU 0b2 Processing config update tx with system channel message processor for channel ID mychannel
2019-02-15 20:15:00.068 UTC [orderer/common/msgprocessor] ProcessConfigUpdateMsg -> DEBU 0b3 Processing config update message for channel mychannel
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b4 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers ==
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b5 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b6 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers ==
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b7 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b8 == Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererOrg/Writers ==
2019-02-15 20:15:00.069 UTC [msp] DeserializeIdentity -> DEBU 0b9 Obtaining identity
2019-02-15 20:15:00.069 UTC [msp/identity] newIdentity -> DEBU 0ba Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICEzCCAbmgAwIBAgIQSNAnza0BnDG0ZBvOSPenpDAKBggqhkjOPQQDAjBvMQsw
(LONG TEXTS)9XYOAcEPDg==
-----END CERTIFICATE-----
2019-02-15 20:15:00.069 UTC [cauthdsl] func1 -> DEBU 0bb 0xc42016e118 gate 1550261700069869014 evaluation starts
2019-02-15 20:15:00.069 UTC [cauthdsl] func2 -> DEBU 0bc 0xc42016e118 signed by 0 principal evaluation starts (used [false])
2019-02-15 20:15:00.069 UTC [cauthdsl] func2 -> DEBU 0bd 0xc42016e118 processing identity 0 with bytes of 0a05646c4d5350128e062d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d494943457a434341626d674177494241674951534e416e7a6130426e4447305a42764f5350656e7044414b42676771686b6a4f50515144416a42764d5173770a435159445651514745774a56557a45544d4245474131554543424d4b5132467361575a76636d3570595445574d4251474131554542784d4e5532467549455a790a5957356a61584e6a627a45584d4255474131554543684d4f5a4777755a586868625842735a53356a62323078476a415942674e5642414d5445574e684c6d52730a4c6d56345957317762475575593239744d423458445445354d4449784e5449774d5441774d466f58445449354d4449784d6a49774d5441774d466f775754454c0a4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e68626942470a636d467559326c7a593238784854416242674e5642414d4d4645466b62576c75514752734c6d56345957317762475575593239744d466b77457759484b6f5a490a7a6a3043415159494b6f5a497a6a304441516344516741456431357541374a6d464176472f78416c786f567977765459595a4f717677592b36307a37526d59650a743775617553467879777965724456453249497669374f2b626e56463163564d4a544b2b374775434531706c56614e4e4d45737744675944565230504151482f0a42415144416765414d41774741315564457745422f7751434d4141774b7759445652306a42435177496f41673370722f4f6c702f4443314d477a5266525855360a4b4c75576d7353346a347959616f48633158366e306e3077436759494b6f5a497a6a3045417749445341417752514968414e46647870675247546f73545734460a734243794e35474857465539637a61354c467765613243334d6143534169414954462f54366d515872557639644c4c7231426d3342316a697470526b745579650a3958594f4163455044673d3d0a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a
2019-02-15 20:15:00.070 UTC [cauthdsl] func2 -> DEBU 0be 0xc42016e118 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got dlMSP)
2019-02-15 20:15:00.070 UTC [cauthdsl] func2 -> DEBU 0bf 0xc42016e118 principal evaluation fails
2019-02-15 20:15:00.070 UTC [cauthdsl] func1 -> DEBU 0c0 0xc42016e118 gate 1550261700069869014 evaluation fails
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c1 Signature set did not satisfy policy /Channel/Orderer/OrdererOrg/Writers
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c2 == Done Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererOrg/Writers
2019-02-15 20:15:00.070 UTC [policies] func1 -> DEBU 0c3 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdererOrg.Writers ]
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c4 Signature set did not satisfy policy /Channel/Orderer/Writers
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c5 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers
2019-02-15 20:15:00.070 UTC [policies] func1 -> DEBU 0c6 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Orderer.Writers Consortiums.Writers ]
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c7 Signature set did not satisfy policy /Channel/Writers
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c8 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers
2019-02-15 20:15:00.070 UTC [orderer/common/broadcast] Handle -> WARN 0c9 [channel: mychannel] Rejecting broadcast of config message from 192.168.176.6:38940 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2019-02-15 20:15:00.070 UTC [orderer/common/server] func1 -> DEBU 0ca Closing Broadcast stream
2019-02-15 20:15:00.072 UTC [grpc] warningf -> DEBU 0cb transport: http2Server.HandleStreams failed to read frame: read tcp 192.168.176.4:7050->192.168.176.6:38940: read: connection reset by peer
2019-02-15 20:15:00.072 UTC [grpc] infof -> DEBU 0cc transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-02-15 20:15:00.073 UTC [common/deliver] Handle -> WARN 0cd Error reading from 192.168.176.6:38938: rpc error: code = Canceled desc = context canceled
2019-02-15 20:15:00.073 UTC [orderer/common/server] func1 -> DEBU 0cf Closing Deliver stream
2019-02-15 20:15:00.073 UTC [grpc] infof -> DEBU 0ce transport: loopyWriter.run returning. connection error: desc = "transport is closing"
The configtx.yaml file:
Organizations:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
- &dl
Name: dlMSP
ID: dlMSP
MSPDir: crypto-config/peerOrganizations/dl.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('dlMSP.admin', 'dlMSP.peer', 'dlMSP.client')"
Writers:
Type: Signature
Rule: "OR('dlMSP.admin', 'dlMSP.client')"
Admins:
Type: Signature
Rule: "OR('dlMSP.admin')"
Capabilities:
Channel: &ChannelCapabilities
V1_3: true
Orderer: &OrdererCapabilities
V1_1: true
Application: &ApplicationCapabilities
V1_3: true
V1_2: false
V1_1: false
Application: &ApplicationDefaults
Organizations:
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
Orderer: &OrdererDefaults
OrdererType: solo
Addresses:
- orderer.example.com:7050
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- 127.0.0.1:9092
Organizations:
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
BlockValidation:
Type: ImplicitMeta
Rule: "ANY Writers"
Channel: &ChannelDefaults
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
Capabilities:
<<: *ChannelCapabilities
Profiles:
SingleOrgOrdererGenesis:
<<: *ChannelDefaults
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
SampleConsortium:
Organizations:
- *dl
SingleOrgChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *dl
Capabilities:
<<: *ApplicationCapabilities
The crypto-config.yaml file:
OrdererOrgs:
- Name: orderer
Domain: example.com
Specs:
- Hostname: orderer
PeerOrgs:
- Name: dl
Domain: dl.example.com
EnableNodeOUs: true
Template:
Count: 3 #NUMBER OF PEERS
Users:
Count: 2 #NUMBER OF USERS APART FROM THE ADMIN
The docker-compose-cli.yaml file
version: '2'
volumes:
orderer.example.com:
peer0.dl.example.com:
peer1.dl.example.com:
peer2.dl.example.com:
networks:
v1:
services:
orderer.example.com:
extends:
file: base/docker-compose-base.yaml
service: orderer.example.com
container_name: orderer.example.com
networks:
- v1
peer0.dl.example.com:
container_name: peer0.dl.example.com
extends:
file: base/docker-compose-base.yaml
service: peer0.dl.example.com
networks:
- v1
peer1.dl.example.com:
container_name: peer1.dl.example.com
extends:
file: base/docker-compose-base.yaml
service: peer1.dl.example.com
networks:
- v1
peer2.dl.example.com:
container_name: peer2.dl.example.com
extends:
file: base/docker-compose-base.yaml
service: peer2.dl.example.com
networks:
- v1
cli:
container_name: cli
image: hyperledger/fabric-tools:$IMAGE_TAG
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
#- CORE_LOGGING_LEVEL=DEBUG
- CORE_LOGGING_LEVEL=INFO
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.dl.example.com:7051
- CORE_PEER_LOCALMSPID=dlMSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/users/Admin@dl.example.com/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: /bin/bash
volumes:
- /var/run/:/host/var/run/
- ./../chaincode/:/opt/gopath/src/github.com/chaincode
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- orderer.example.com
- peer0.dl.example.com
- peer1.dl.example.com
- peer2.dl.example.com
networks:
- v1
The docker-compose-base.yaml file:
version: '2'
services:
orderer.example.com:
container_name: orderer.example.com
image: hyperledger/fabric-orderer:$IMAGE_TAG
environment:
#- ORDERER_GENERAL_LOGLEVEL=INFO
- ORDERER_GENERAL_LOGLEVEL=DEBUG
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
- orderer.example.com:/var/hyperledger/production/orderer
ports:
- 7050:7050
peer0.dl.example.com:
container_name: peer0.dl.example.com
extends:
file: peer-base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.dl.example.com
- CORE_PEER_ADDRESS=peer0.dl.example.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.dl.example.com:7051
# - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dl.example.com:7051
- CORE_PEER_LOCALMSPID=dlMSP
volumes:
- /var/run/:/host/var/run/
- ../crypto-config/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls:/etc/hyperledger/fabric/tls
- peer0.dl.example.com:/var/hyperledger/production
ports:
- 7051:7051
- 7053:7053
peer1.dl.example.com:
container_name: peer1.dl.example.com
extends:
file: peer-base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer1.dl.example.com
- CORE_PEER_ADDRESS=peer1.dl.example.com:7051
# - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.dl.example.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.dl.example.com:7051
- CORE_PEER_LOCALMSPID=dlMSP
volumes:
- /var/run/:/host/var/run/
- ../crypto-config/peerOrganizations/dl.example.com/peers/peer1.dl.example.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/dl.example.com/peers/peer1.dl.example.com/tls:/etc/hyperledger/fabric/tls
- peer1.dl.example.com:/var/hyperledger/production
ports:
- 8051:7051
- 8053:7053
peer2.dl.example.com:
container_name: peer2.dl.example.com
extends:
file: peer-base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer2.dl.example.com
- CORE_PEER_ADDRESS=peer2.dl.example.com:7051
# - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dl.example.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.dl.example.com:7051
- CORE_PEER_LOCALMSPID=dlMSP
volumes:
- /var/run/:/host/var/run/
- ../crypto-config/peerOrganizations/dl.example.com/peers/peer2.dl.example.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/dl.example.com/peers/peer2.dl.example.com/tls:/etc/hyperledger/fabric/tls
- peer2.dl.example.com:/var/hyperledger/production
ports:
- 9051:7051
- 9053:7053
Link to my code: https://mega.nz/#F!vJIUWKgZ!hx1geJ916PH0LrKKe5Q0RA!LQRBmITR
Try using different names for the channel ID in your commands (here the channel ID is byfn-sys-channel, the so-called "system channel"):
../bin/configtxgen -profile TwoOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
and in all the remaining commands use a DIFFERENT channel ID (here, mychannel):
export CHANNEL_NAME=mychannel && ../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
...
export CHANNEL_NAME=mychannel
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
I got the same error:
Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
After removing everything in Docker using the following command, the error was solved.
docker stop $(docker ps -a -q) ; docker rm -f $(docker ps -aq) ; docker system prune -a ; docker volume prune ; docker ps -a ; docker images -a ; docker volume ls
The 'docker volume prune' command is especially important.
- &dl
Name: dlMSP
ID: dlMSP
MSPDir: crypto-config/peerOrganizations/dl.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('dlMSP.admin', 'dlMSP.peer', 'dlMSP.client')"
Writers:
Type: Signature
Rule: "OR('dlMSP.admin', 'dlMSP.client')"
Admins:
Type: Signature
Rule: "OR('dlMSP.admin')"
It means that only 'dlMSP.admin' members will be allowed to create the channel.
Make sure your certificate has the required attributes.
The command below makes sure the admin attribute is added to the registered identity:
fabric-ca-client register --id.name admin2 --id.affiliation org1.department1 --id.attrs 'hf.Revoker=true,admin=true:ecert'
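For completeness, the registered identity then has to be enrolled so a certificate carrying the attribute is issued; a hypothetical enroll call could look like this (the CA URL, enrollment secret, and MSP directory are placeholders, and the :ecert suffix above already marks the admin attribute to be embedded in the enrollment certificate by default):
# enroll the identity registered above; attributes can also be requested explicitly with --enrollment.attrs
fabric-ca-client enroll -u https://admin2:<enrollment-secret>@ca.org1.example.com:7054 -M ./admin2/msp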
In your ./byfn.sh script, in the genesis block creation step, you have written this command:
echo "##########################################################"
echo "######### Generating Orderer Genesis block ##############"
echo "##########################################################"
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block -channelID $CHANNEL_NAME
Remove -channelID $CHANNEL_NAME from it and ignore this warning:
2019-02-24 23:34:25.334 IST [common/tools/configtxgen] main -> WARN 001 Omitting the channel ID for configtxgen for output operations is deprecated. Explicitly passing the channel ID will be required in the future, defaulting to 'testchainid'
It should work now. It did on my system.
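In other words, the genesis block generation line would become something like:
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block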
It looks like the channel has already been created and you are trying to send a proto file (channel.tx) with the same channel ID.
If you are just trying to create a new channel, change the channel name, re-create channel.tx, and send the updated config in the cli command.
If you are trying to update the channel config, refer to this document and follow along to fetch the latest config block and make the necessary changes to the MSP ID as required.
Remember, once a channel is created, the orderer only accepts a channel config update envelope to update it, not the channel.tx file.
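As a rough sketch of that update flow with configtxlator (channel name, orderer address, and the CA file variable are placeholders based on the question):
# fetch the current config block for the channel
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile $ORDERER_CA
# decode it and strip the wrapper down to the config section
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
cp config.json modified_config.json          # make your changes in modified_config.json
# encode both versions and compute the delta
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel --original config.pb --updated modified_config.pb --output update.pb
# wrap the delta in an envelope and submit it
configtxlator proto_decode --input update.pb --type common.ConfigUpdate | jq . > update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat update.json)'}}}' | jq . > update_envelope.json
configtxlator proto_encode --input update_envelope.json --type common.Envelope --output update_envelope.pb
peer channel update -f update_envelope.pb -c mychannel -o orderer.example.com:7050 --tls --cafile $ORDERER_CA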
If you run into this problem, just remove the old Docker volumes using:
docker volume rm $(docker volume ls -q)
Either the channel has already been created, or you don't have permission to access it; in the latter case you need to change the permissions.
The simpler solution is to remove all the containers and images and start again. The commands are as follows:
docker stop $(docker ps -a -q)    # stop all containers first
docker rm -f $(docker ps -aq)     # remove all of them
docker system prune -a            # remove all stopped containers and unused images
docker volume prune               # remove all unused volumes
Now start the network again.
I have some organizations with more than 2 peers. When editing docker-compose-base.yaml, I am not sure how to define CORE_PEER_GOSSIP_BOOTSTRAP. Below is what I did, but the log shows that the peer fails to connect to the gossip peers. What is the correct way to do this? Thank you in advance!
docker-compose-base.yaml
peer0.caseManager.snts.com:
container_name: peer0.caseManager.snts.com
extends:
file: peer-base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.caseManager.snts.com
- CORE_PEER_ADDRESS=peer0.caseManager.snts.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=[peer1.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051]
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.caseManager.snts.com:7051
- CORE_PEER_LOCALMSPID=CaseManagerMSP
volumes:
- /var/run/:/host/var/run/
- ../crypto-config/peerOrganizations/caseManager.snts.com/peers/peer0.caseManager.snts.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/caseManager.snts.com/peers/peer0.caseManager.snts.com/tls:/etc/hyperledger/fabric/tls
- peer0.caseManager.snts.com:/var/hyperledger/production
ports:
- 9051:7051
- 9053:7053
log of "docker-compose -p docker-compose.yaml up"
peer0.caseManager.snts.com | 2018-11-15 16:21:18.420 UTC [gossip/discovery] func1 -> WARN 023 Could not connect to {peer2.caseManager.snts.com:7051] [] [] peer2.caseManager.snts.com:7051] <nil> <nil>} : context deadline exceeded
peer0.caseManager.snts.com | 2018-11-15 16:21:18.420 UTC [gossip/discovery] func1 -> WARN 024 Could not connect to {[peer1.caseManager.snts.com:7051 [] [] [peer1.caseManager.snts.com:7051 <nil> <nil>} : context deadline exceeded
From a peer's perspective, a bootstrap peer is another peer from the same organization that it can reach out to during startup to get the information needed to bring gossip communication up (see here).
Your setup looks reasonable, and it is perfectly plausible that your Peer0 started up earlier than Peer1 and Peer2 and was unable to find them during startup; that is not out of the ordinary. Did you end up having any error? If not, this looks like normal operation. Note, however, that the value should be a plain space-separated list of endpoints, without brackets:
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051
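Concretely, with three peers in the organization, each peer can list its sibling peers as a space-separated set of endpoints, without brackets (a sketch based on the services above):
peer0.caseManager.snts.com:
  environment:
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051
peer1.caseManager.snts.com:
  environment:
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.caseManager.snts.com:7051 peer2.caseManager.snts.com:7051
peer2.caseManager.snts.com:
  environment:
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.caseManager.snts.com:7051 peer1.caseManager.snts.com:7051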
I'm trying to test the latest version of Hyperledger Fabric, the v1 release currently in incubation, and I have run into an issue.
I'm following the instructions here to install Fabric:
https://hyperledger-fabric.readthedocs.io/en/latest/asset_setup/
I'm using Docker to spawn the network entities and create/join a channel:
sudo docker --version
Docker version 1.13.1, build 092cba3
sudo docker-compose --version
docker-compose version 1.11.2, build dfed245
When I run:
sudo docker-compose -f docker-compose-gettingstarted.yml up
My containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f1b6d6128d43 sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "sh -c './channel_..." 21 minutes ago Up About a minute cli
8f9df755c160 sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "peer node start -..." 21 minutes ago Up About a minute 0.0.0.0:8056->7051/tcp peer2
2de6ee624d28 sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "peer node start -..." 21 minutes ago Up About a minute 0.0.0.0:8055->7051/tcp peer1
31ac53b6e5db sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "peer node start -..." 21 minutes ago Up About a minute 0.0.0.0:8051->7051/tcp, 0.0.0.0:8053->7053/tcp peer0
d98fc2a8652f sfhackfest22017/fabric-ca:x86_64-0.7.0-snapshot-6294c57 "sh -c 'sleep 10; ..." 21 minutes ago Up About a minute 0.0.0.0:8054->7054/tcp ca
07dcfceb86cc sfhackfest22017/fabric-orderer:x86_64-0.7.0-snapshot-c7b3fe0 "orderer" 21 minutes ago Up About a minute 0.0.0.0:8050->7050/tcp orderer
but I get this error on the last line:
2017-03-01 14:55:32.183 UTC [msp] newIdentity -> INFO 016 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.270 UTC [peer] GetManagerForChain -> INFO 017 Created new msp manager for chain testchainid
cli | 2017-03-01 14:55:32.270 UTC [msp] Setup -> INFO 018 Setting up the MSP manager (1 msps)
cli | 2017-03-01 14:55:32.270 UTC [msp] Setup -> INFO 019 Setting up MSP
cli | 2017-03-01 14:55:32.270 UTC [msp] NewBccspMsp -> INFO 01a Creating BCCSP-based MSP instance
cli | 2017-03-01 14:55:32.270 UTC [msp] Setup -> INFO 01b Setting up MSP instance DEFAULT
cli | 2017-03-01 14:55:32.270 UTC [msp] newIdentity -> INFO 01c Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.270 UTC [msp] newIdentity -> INFO 01d Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.271 UTC [msp] newIdentity -> INFO 01e Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.271 UTC [msp] newIdentity -> INFO 01f Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] newIdentity -> INFO 020 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] newIdentity -> INFO 021 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] newIdentity -> INFO 022 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] Setup -> INFO 023 MSP manager setup complete, setup 1 msps
cli | 2017-03-01 14:55:32.275 UTC [logging] InitFromViper -> DEBU 024 Setting default logging level to DEBUG for command 'channel'
cli | 2017-03-01 14:55:32.275 UTC [peer] GetLocalMSP -> INFO 025 Returning existing local MSP
cli | 2017-03-01 14:55:32.275 UTC [msp] GetDefaultSigningIdentity -> INFO 026 Obtaining default signing identity
cli | Error: Error getting broadcast client: Error connecting to orderer:7050 due to grpc: timed out when dialing
How can I fix it?
And when I perform:
sudo docker exec -it cli bash
[sudo] password for blockchain:
root@f1b6d6128d43:/opt/gopath/src/github.com/hyperledger/fabric/peer# cat results.txt
ERROR on CHANNEL CREATION
Here is my docker-compose.yml:
version: '2'
networks:
bridge:
services:
ccenv_latest:
container_name: ccenv_latest
build: ./ccenv
image: hyperledger/fabric-ccenv:latest
volumes:
- ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ccenv_snapshot:
container_name: ccenv_snapshot
build: ./ccenv
image: hyperledger/fabric-ccenv:x86_64-0.7.0-snapshot-c7b3fe0
volumes:
- ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ca:
image: sfhackfest22017/fabric-ca:x86_64-0.7.0-snapshot-6294c57
ports:
- 8054:7054
environment:
- CA_CERTIFICATE=peerOrg0_cert.pem
- CA_KEY_CERTIFICATE=peerOrg0_pk.pem
volumes:
- ./tmp/ca:/.fabric-ca
command: sh -c 'sleep 10; fabric-ca server start -ca /.fabric-ca/$$CA_CERTIFICATE -ca-key /.fabric-ca/$$CA_KEY_CERTIFICATE -config /etc/hyperledger/fabric-ca/server-config.json -address "0.0.0.0"'
container_name: ca
orderer:
container_name: orderer
image: sfhackfest22017/fabric-orderer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- ORDERER_GENERAL_LEDGERTYPE=ram
- ORDERER_GENERAL_BATCHTIMEOUT=10s
- ORDERER_GENERAL_BATCHSIZE_MAXMESSAGECOUNT=10
- ORDERER_GENERAL_MAXWINDOWSIZE=1000
- ORDERER_GENERAL_ORDERERTYPE=solo
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_RAMLEDGER_HISTORY_SIZE=100
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
command: orderer
ports:
- 8050:7050
networks:
- bridge
peer0:
container_name: peer0
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_NETWORKID=peer0
- CORE_NEXT=true
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_ID=peer0
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_GOSSIP_ORGLEADER=true
- CORE_PEER_GOSSIP_IGNORESECURITY=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start --peer-defaultchain=false
ports:
- 8051:7051
- 8053:7053
links:
- orderer:orderer
volumes:
- /var/run/:/host/var/run/
- ./tmp/peer0:/etc/hyperledger/fabric/msp/sampleconfig
networks:
- bridge
peer1:
container_name: peer1
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_NETWORKID=peer0
- CORE_NEXT=true
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_ID=peer1
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_GOSSIP_ORGLEADER=true
- CORE_PEER_GOSSIP_IGNORESECURITY=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
ports:
- 8055:7051
command: peer node start --peer-defaultchain=false
links:
- orderer:orderer
- peer0:peer0
volumes:
- /var/run/:/host/var/run/
- ./tmp/peer1:/etc/hyperledger/fabric/msp/sampleconfig
networks:
- bridge
peer2:
container_name: peer2
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_NETWORKID=peer0
- CORE_NEXT=true
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_ID=peer2
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_GOSSIP_ORGLEADER=true
- CORE_PEER_GOSSIP_IGNORESECURITY=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
ports:
- 8056:7051
command: peer node start --peer-defaultchain=false
links:
- orderer:orderer
- peer0:peer0
- peer1:peer1
volumes:
- /var/run/:/host/var/run/
- ./tmp/peer2:/etc/hyperledger/fabric/msp/sampleconfig
networks:
- bridge
cli:
container_name: cli
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_NEXT=true
- CORE_PEER_ID=cli
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_ADDRESS=peer0:7051
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: sh -c './channel_test.sh; sleep 10000'
# command: /bin/sh
links:
- orderer:orderer
- peer0:peer0
- peer1:peer1
- peer2:peer2
volumes:
- /var/run/:/host/var/run/
#in the "- <HOST>:/opt/gopath/src/github.com/hyperledger/fabric/examples/" mapping below, the HOST part
#should be modified to the path on the host. This will work as is in the Vagrant environment
- ./src/github.com/example_cc/example_cc.go:/opt/gopath/src/github.com/hyperledger/fabric/examples/example_cc.go
- ./tmp/peer3:/etc/hyperledger/fabric/msp/sampleconfig
- ./channel_test.sh:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel_test.sh
networks:
- bridge
And this is my channel_test.sh:
#!/bin/sh
# find address of peer0 in your network
PEER0_IP_ADDRESS=`perl -e 'use Socket; $a = inet_ntoa(inet_aton("peer0")); print "$a\n";'`
# create an anchor file
cat<<EOF>anchorPeer.txt
$PEER0_IP_ADDRESS
7051
-----BEGIN CERTIFICATE-----
MIICjDCCAjKgAwIBAgIUBEVwsSx0TmqdbzNwleNBBzoIT0wwCgYIKoZIzj0EAwIw
fzELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNh
biBGcmFuY2lzY28xHzAdBgNVBAoTFkludGVybmV0IFdpZGdldHMsIEluYy4xDDAK
BgNVBAsTA1dXVzEUMBIGA1UEAxMLZXhhbXBsZS5jb20wHhcNMTYxMTExMTcwNzAw
WhcNMTcxMTExMTcwNzAwWjBjMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGgg
Q2Fyb2xpbmExEDAOBgNVBAcTB1JhbGVpZ2gxGzAZBgNVBAoTEkh5cGVybGVkZ2Vy
IEZhYnJpYzEMMAoGA1UECxMDQ09QMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE
HBuKsAO43hs4JGpFfiGMkB/xsILTsOvmN2WmwpsPHZNL6w8HWe3xCPQtdG/XJJvZ
+C756KEsUBM3yw5PTfku8qOBpzCBpDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYw
FAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFOFC
dcUZ4es3ltiCgAVDoyLfVpPIMB8GA1UdIwQYMBaAFBdnQj2qnoI/xMUdn1vDmdG1
nEgQMCUGA1UdEQQeMByCCm15aG9zdC5jb22CDnd3dy5teWhvc3QuY29tMAoGCCqG
SM49BAMCA0gAMEUCIDf9Hbl4xn3z4EwNKmilM9lX2Fq4jWpAaRVB97OmVEeyAiEA
25aDPQHGGq2AvhKT0wvt08cX1GTGCIbfmuLpMwKQj38=
-----END CERTIFICATE-----
EOF
#create
echo "Creating channel on Orderer"
CORE_PEER_GOSSIP_IGNORESECURITY=true CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp/sampleconfig CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer channel create -c myc1 -a anchorPeer.txt >>log.txt 2>&1
cat log.txt
grep -q "Exiting" log.txt
if [ $? -ne 0 ]; then
echo "ERROR on CHANNEL CREATION" >> results.txt
exit 1
fi
echo "SUCCESSFUL CHANNEL CREATION" >> results.txt
sleep 5
TOTAL_PEERS=3
i=0
while test $i -lt $TOTAL_PEERS
do
echo "###################################### Joining peer$i"
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 CORE_PEER_ADDRESS=peer$i:7051 peer channel join -b myc1.block >>log.txt 2>&1
cat log.txt
echo '-------------------------------------------------'
grep -q "Join Result: " log.txt
if [ $? -ne 0 ]; then
echo "ERROR on JOIN CHANNEL" >> results.txt
exit 1
fi
echo "SUCCESSFUL JOIN CHANNEL on PEER$i" >> results.txt
echo "SUCCESSFUL JOIN CHANNEL on PEER$i"
i=$((i+1))
sleep 10
done
echo "Peer0 , Peer1 and Peer2 are added to the channel myc1"
cat log.txt
exit 0
I faced a similar problem and found it was a DNS issue, which I then corrected in the "hosts" file. The cause was the hostname "orderer". FYR.
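If you hit the same thing, the workaround amounts to finding the orderer container's IP and adding a matching hosts entry (the IP below is only a placeholder; use the one Docker reports):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' orderer
# then add a line like this to /etc/hosts on the machine running the CLI:
172.18.0.2   orderer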
What OS are you using? I had the same issue running this getting-started guide in VirtualBox with Ubuntu 16.04 LTS on Windows 7. After that I tried installing Ubuntu side by side with Windows, and it worked on the first try.
It may be because the Docker containers are on different networks.
If the new organization is on a different network, its peers cannot communicate with the orderer, which is on the other network.
Check the networks with: docker network ls
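To see which containers are attached to which network, and to attach a container to the right one, something like this works (the network and container names are examples):
docker network inspect <network-name>                # lists the containers attached to that network
docker network connect <network-name> <container>    # attach a container to the orderer's network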