Hyperledger fabric: error validating DeltaSet while creating channel - hyperledger

I need to create 3 channels for 2 orgs:
channelAll : Org1,Org2
channelOrg1 : Org1
channelOrg2 : Org2
I have successfully created the first and second channels, but the third fails with the error below.
Error: got unexpected status: BAD_REQUEST -- error authorizing update:
error validating DeltaSet: policy for [Group] /Channel/Application
not satisfied: Failed to reach implicit threshold of 1 sub-policies,
required 1 remaining
root@871fcf2002f9:/opt/gopath/src/github.com/hyperledger/fabric/peer#
Do you have any ideas how to resolve this? Here's my configtx.yaml:
Profiles:
TwoOrgsOrdererGenesis:
Capabilities:
<<: *ChannelCapabilities
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
LCFNConsortium:
Organizations:
- *Org1
- *Org2
pfAllChannel:
Consortium: LCFNConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
Capabilities:
<<: *ApplicationCapabilities
pfOrg1Channel:
Consortium: LCFNConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
Capabilities:
<<: *ApplicationCapabilities
pfOrg2Channel:
Consortium: LCFNConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org2
Capabilities:
<<: *ApplicationCapabilities

This means that the orderer is not getting a signature from the required organization's MSP. You can verify this in the orderer logs.

Make sure the MSP ID you export as CORE_PEER_LOCALMSPID before signing your envelope is correct; the value is case-sensitive. If your MSP is org1.example.comMSP, then Org1.example.comMSP won't work.
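Because the comparison is case-sensitive, a tiny pre-flight check can save a round-trip to the orderer logs. This is a hypothetical helper, not part of the Fabric tooling:

```shell
# Hypothetical pre-flight check: CORE_PEER_LOCALMSPID must match the MSP ID
# defined in configtx.yaml exactly, including case.
check_mspid() {
  expected="$1"; actual="$2"
  if [ "$expected" = "$actual" ]; then
    echo "MSP ID ok"
  elif [ "$(printf '%s' "$expected" | tr '[:upper:]' '[:lower:]')" = \
         "$(printf '%s' "$actual" | tr '[:upper:]' '[:lower:]')" ]; then
    echo "case mismatch: expected $expected, got $actual"
  else
    echo "wrong MSP ID: expected $expected, got $actual"
  fi
}

check_mspid "Org1MSP" "org1MSP"   # prints: case mismatch: expected Org1MSP, got org1MSP
```

If the helper reports a case mismatch, re-export CORE_PEER_LOCALMSPID with the exact ID from configtx.yaml and sign the envelope again.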


Hyperledger Fabric Service Endpoint Error While Trying to Connect Organization3 From Client

I am using Hyperledger Fabric's test network.
When I log in as the admin user with Org1 or Org2 from the client side of my project, there is no problem. But when I try to log in with Org3 it gives these errors:
"2022-12-20T12:33:07.185Z - info: [NetworkConfig]: buildPeer - Unable to connect to the endorser peer0.org3.example.com due to Error: Failed to connect before the deadline on Endorser- name: peer0.org3.example.com, url:grpcs://localhost:11051, connected:false, connectAttempted: true"
at checkState (/Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/node_modules/@grpc/grpc-js/build/src/client.js:77:26)
at Timeout._onTimeout (/Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/node_modules/@grpc/grpc-js/build/src/channel.js:525:17)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
connectFailed: true
}
"2022-12-20T12:33:10.197Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Discoverer- name: peer0.org3.example.com, url:grpcs://localhost:11051, connected:false, connectAttempted: true"
2022-12-20T12:33:10.198Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer0.org3.example.com url:grpcs://localhost:11051 timeout:3000
"2022-12-20T12:33:10.198Z - error: [ServiceEndpoint]: ServiceEndpoint grpcs://localhost:11051 reset connection failed :: Error: Failed to connect before the deadline on Discoverer- name: peer0.org3.example.com, url:grpcs://localhost:11051, connected:false, connectAttempted: true"
2022-12-20T12:33:10.198Z - error: [DiscoveryService]: send [mychannel] - no discovery results
GET /users/login?username=admin&password=adminpw&organization=org3 304 6108.941 ms - -
I created organization 3 in the test network using the following commands, in order:
cd hyperledgerfab/fabric-samples/test-network
./network.sh down
./network.sh up createChannel -ca -s couchdb
cd addOrg3
./addOrg3.sh up -c mychannel -ca -s couchdb
export FABRIC_CFG_PATH=$PWD
../../bin/configtxgen -printOrg Org3MSP > ../organizations/peerOrganizations/org3.example.com/org3.json
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker-compose -f compose/docker/docker-compose-org3.yaml up -d
cd ..
./network.sh deployCC -ccn SupplychainContract -ccp ../HyperLedger-Fabric_Supplychain/chaincode/ -ccl javascript -ccep "OR('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')"
export PATH=${PWD}/../bin:$PATH
export FABRIC_CFG_PATH=$PWD/../config/
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=localhost:7051
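For comparison, the peer CLI environment for talking to peer0.org3.example.com would look something like the sketch below; the paths assume the default test-network/addOrg3 layout and the 11051 port from docker-compose-org3.yaml:

```shell
# Sketch: peer CLI environment for Org3 (paths assume the default
# fabric-samples test-network layout; adjust to your tree).
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org3MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org3.example.com/users/Admin@org3.example.com/msp
export CORE_PEER_ADDRESS=localhost:11051
```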
When I run ./addOrg3.sh up -c mychannel -ca -s couchdb, there is no problem:
2022-12-20 20:20:56.173 UTC 0001 INFO [channelCmd] InitCmdFactory -> Endorser and orderer connections initialized
2022-12-20 20:20:56.263 UTC 0002 INFO [channelCmd] update -> Successfully submitted channel update
Anchor peer set for org 'Org3MSP' on channel 'mychannel'
Channel 'mychannel' joined
Org3 peer successfully added to network
When I run the docker-compose -f compose/docker/docker-compose-org3.yaml up -d command it seems like there is no problem:
ecem@Ecems-MacBook-Air addOrg3 % docker-compose -f compose/docker/docker-compose-org3.yaml up -d
[+] Running 1/0
⠿ Container peer0.org3.example.com Running
I don't have any errors while adding admins to all three organizations:
ecem@Ecems-MacBook-Air api % node enrollAdmin.js org1 admin adminpw
0: /Users/ecem/.nvm/versions/node/v16.18.1/bin/node
1: /Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/enrollAdmin.js
2: org1
3: admin
4: adminpw
Wallet path: /Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/wallet/org1
Test1
Test2
Test3
Successfully enrolled admin user "admin" and imported it into the wallet
ecem@Ecems-MacBook-Air api % node enrollAdmin.js org2 admin adminpw
0: /Users/ecem/.nvm/versions/node/v16.18.1/bin/node
1: /Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/enrollAdmin.js
2: org2
3: admin
4: adminpw
Wallet path: /Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/wallet/org2
Test1
Test2
Test3
Successfully enrolled admin user "admin" and imported it into the wallet
ecem@Ecems-MacBook-Air api % node enrollAdmin.js org3 admin adminpw
0: /Users/ecem/.nvm/versions/node/v16.18.1/bin/node
1: /Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/enrollAdmin.js
2: org3
3: admin
4: adminpw
Wallet path: /Users/ecem/hyperledgerfab/fabric-samples/HyperLedger-Fabric_Supplychain/api/wallet/org3
Test1
Test2
Test3
Successfully enrolled admin user "admin" and imported it into the wallet
This is my docker-compose-org3.yaml file:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
volumes:
peer0.org3.example.com:
networks:
test:
name: fabric_test
services:
peer0.org3.example.com:
container_name: peer0.org3.example.com
image: hyperledger/fabric-peer:latest
environment:
- DOCKER_DEFAULT_PLATFORM=linux/amd64
#Generic peer variables
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric_test
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
# Peer specific variables
- CORE_PEER_ID=peer0.org3.example.com
- CORE_PEER_ADDRESS=peer0.org3.example.com:11051
- CORE_PEER_LISTENADDRESS=0.0.0.0:11051
- CORE_PEER_CHAINCODEADDRESS=peer0.org3.example.com:11052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.example.com:11051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.example.com:11051
- CORE_PEER_LOCALMSPID=Org3MSP
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- ../../organizations/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/msp:/etc/hyperledger/fabric/msp
- ../../organizations/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls:/etc/hyperledger/fabric/tls
- peer0.org3.example.com:/var/hyperledger/production
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
ports:
- 11051:11051
networks:
- test

Multiple Database Setup Github Actions

I have defined a secondary external database in my Rails application for read-only purposes. For testing, I therefore set up a local database and plan to mock data within my test examples. Connecting to the database and running tests locally works great. However, when running the CI tests, the secondary database fails to set up.
I believe this to be a configuration issue within the ci.yml file, and I am not sure how to configure this properly.
# ci.yml
name: Continuous Integration
on:
pull_request:
branches: [ main ]
jobs:
test:
name: Testing
runs-on: ubuntu-latest
services:
postgres:
image: postgres:12
ports:
- "5432:5432"
env:
POSTGRES_USER: rails
POSTGRES_PASSWORD: password
env:
RAILS_ENV: test
RAILS_MASTER_KEY: ${{ secrets.RAILS_MASTER_KEY }}
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Chromedriver
uses: nanasess/setup-chromedriver@v1
# with:
# Optional: do not specify to match Chrome's version
# chromedriver-version: '88.0.4324.96'
# Add or replace dependency steps here
- name: Install Ruby and gems
uses: ruby/setup-ruby@1a68550f2e3309e13c8ccb91ac6b8786f59ee147
with:
bundler-cache: true
# Add or replace database setup steps here
- name: Set up primary database
env:
POSTGRES_DB: calendarize_test
DATABASE_URL: "postgres://rails:password@localhost:5432/calendarize_test"
run: bin/rails db:create:primary db:migrate:primary
- name: Set up warehouse database
env:
POSTGRES_DB: warehouse_test
DATABASE_URL: "postgres://rails:password@localhost:5432/warehouse_test"
run: bin/rails db:create:warehouse db:migrate:warehouse
# Add or replace test runners here
- name: Start Chromedriver
run: |
export DISPLAY=:99
chromedriver --url-base=/wd/hub --disable-dev-shm-usage &
sudo Xvfb -ac :99 -screen 0 1280x1024x24 > /dev/null 2>&1 & # optional
- name: Run tests
run: bundle exec rspec --color
# database.yml
default: &default
adapter: postgresql
encoding: unicode
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
calendarize: &calendarize
<<: *default
host: localhost
username: <%= ENV["CALENDARIZE_DATABASE_USERNAME"] %>
password: <%= ENV["CALENDARIZE_DATABASE_PASSWORD"] %>
test:
primary:
<<: *calendarize
database: calendarize_test
warehouse:
<<: *calendarize
database: warehouse_test
migrations_paths: db/warehouse_migrate
development:
primary:
<<: *calendarize
database: calendarize_development
warehouse:
<<: *calendarize
database: warehouse_development
migrations_paths: db/warehouse_migrate
production:
primary:
<<: *calendarize
database: <%= ENV["CALENDARIZE_DATABASE_NAME"] %>
warehouse:
<<: *default
url: <%= ENV["WAREHOUSE_DATABASE_URL"] %>
database_tasks: false
I think I encountered the same issue when running tests in CircleCI with multiple databases, because CircleCI assumes you will only need one database. In my case I was using MySQL, but I think the same approach will help you too. First I installed the MySQL client in order to run SQL queries from the image we are on in CircleCI. Then I granted privileges to the user from my database.yml, and the rest is just Rails commands. I hope the code below demonstrates it better.
version: 2.1
jobs:
test:
parallelism: 3
docker:
- image: cimg/ruby:3.0.3
- image: cimg/redis:6.2.6
- image: cimg/mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: rootpw
MYSQL_USER: myuser
MYSQL_PASSWORD: wq1234
steps:
- checkout
- run:
name: Waiting for MySQL to be ready
command: |
for i in `seq 1 10`;
do
nc -z 127.0.0.1 3306 && echo Success && exit 0
echo -n .
sleep 1
done
echo Failed waiting for MySQL && exit 1
- run: sudo apt-get update
- run: sudo apt-get install qt5-default libqt5webkit5-dev gstreamer1.0-plugins-base gstreamer1.0-tools gstreamer1.0-x
- run: bundle install
- run: sudo apt-get install mysql-client
- run: mysql -u root -prootpw -e "GRANT ALL PRIVILEGES ON *.* TO 'myuser'@'%' WITH GRANT OPTION" --protocol=tcp
- run: rails db:create
- run: rails db:schema:load --trace
- run:
name: Test
command: rspec
The problem was solved by matching the username/password fields between both ci.yml and database.yml.
# database.yml
default: &default
adapter: postgresql
encoding: unicode
host: localhost
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
test:
primary:
<<: *default
database: calendarize_test
username: postgres
password: postgres
warehouse:
<<: *default
database: warehouse_test
username: postgres
password: postgres
migrations_paths: db/warehouse_migrate
development:
primary:
<<: *default
database: calendarize_development
warehouse:
<<: *default
database: warehouse_development
migrations_paths: db/warehouse_migrate
production:
primary:
<<: *default
database: <%= ENV["CALENDARIZE_DATABASE_NAME"] %>
username: <%= ENV["CALENDARIZE_DATABASE_USERNAME"] %>
password: <%= ENV["CALENDARIZE_DATABASE_PASSWORD"] %>
warehouse:
<<: *default
url: <%= ENV["WAREHOUSE_DATABASE_URL"] %>
database_tasks: false
# ci.yml
name: Continuous Integration
on:
pull_request:
branches: [ main ]
jobs:
test:
name: Testing
runs-on: ubuntu-latest
services:
postgres:
image: postgres
ports:
- "5432:5432"
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
env:
RAILS_ENV: test
RAILS_MASTER_KEY: ${{ secrets.RAILS_MASTER_KEY }}
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Chromedriver
uses: nanasess/setup-chromedriver@v1
# with:
# Optional: do not specify to match Chrome's version
# chromedriver-version: '88.0.4324.96'
# Add or replace dependency steps here
- name: Install Ruby and gems
uses: ruby/setup-ruby@1a68550f2e3309e13c8ccb91ac6b8786f59ee147
with:
bundler-cache: true
# Add or replace database setup steps here
- name: Set up primary database
env:
POSTGRES_DB: calendarize_test
DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/calendarize_test"
run: bin/rails db:create:primary db:migrate:primary
- name: Set up warehouse database
env:
POSTGRES_DB: warehouse_test
DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/warehouse_test"
run: bin/rails db:create:warehouse db:migrate:warehouse
# Add or replace test runners here
- name: Start Chromedriver
run: |
export DISPLAY=:99
chromedriver --url-base=/wd/hub --disable-dev-shm-usage &
sudo Xvfb -ac :99 -screen 0 1280x1024x24 > /dev/null 2>&1 & # optional
- name: Run tests
env:
PG_USER: postgres
PG_PASSWORD: postgres
run: bundle exec rspec --color
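When a credential mismatch like this strikes, it can help to split the user and password back out of the DATABASE_URL and compare them with database.yml by eye. Plain POSIX parameter expansion is enough; the URL below is just an example:

```shell
# Pull the user and password out of a DATABASE_URL-style connection
# string so they can be compared with database.yml.
url="postgresql://postgres:postgres@localhost:5432/calendarize_test"
creds="${url#*://}"      # strip the scheme
creds="${creds%%@*}"     # keep only user:password
user="${creds%%:*}"
password="${creds#*:}"
echo "user=$user password=$password"   # prints: user=postgres password=postgres
```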

Promtail: Loki Server returned HTTP status 429 Too Many Requests

I'm running Loki for test purposes in Docker and have recently been getting the following error from the Promtail and Loki containers:
level=warn ts=2022-02-18T09:41:39.186511145Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
I have tried increasing the limit settings (ingestion_rate_mb and ingestion_burst_size_mb) in my Loki config.
I set up two Promtail jobs: one job ingests MS Exchange logs from a local directory (currently 8 TB and growing), the other receives logs spooled from syslog-ng.
I've read that reducing labels helps, but I'm only using two labels.
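One likely culprit, stated as a guess from the config below: in the syslog job, label_structured_data: yes promotes every RFC 5424 structured-data field to a label, so the effective label set can be far larger than the two labels defined explicitly, and each unique label combination opens a new stream. A lower-cardinality variant of that scrape config might look like this (a sketch; keep only labels you actually query on):

```yaml
- job_name: syslog-ng
  syslog:
    listen_address: 0.0.0.0:1514
    idle_timeout: 60s
    # Each structured-data field becomes its own label, which can create
    # one stream per unique field value; leave off unless queried on.
    label_structured_data: no
    labels:
      job: "syslog-ng"
  relabel_configs:
    - source_labels: ['__syslog_message_hostname']
      target_label: 'host'
```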
Configuration
Below my config files (docker-compose, loki, promtail):
docker-compose.yaml
version: "3"
networks:
loki:
services:
loki:
image: grafana/loki:2.4.2
container_name: loki
restart: always
user: "10001:10001"
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- ${DATADIR}/loki/etc:/etc/loki:rw
- ${DATADIR}/loki/chunks:/loki/chunks
networks:
- loki
promtail:
image: grafana/promtail:2.4.2
container_name: promtail
restart: always
volumes:
- /var/log/loki:/var/log/loki
- ${DATADIR}/promtail/etc:/etc/promtail
ports:
- "1514:1514" # for syslog-ng
- "9080:9080" # for http web interface
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:8.3.4
container_name: grafana
restart: always
user: "476:0"
volumes:
- ${DATADIR}/grafana/var:/var/lib/grafana
ports:
- "3000:3000"
networks:
- loki
Loki Config
auth_enabled: false
server:
http_listen_port: 3100
common:
path_prefix: /loki
storage:
filesystem:
chunks_directory: /loki/chunks
rules_directory: /loki/rules
replication_factor: 1
ring:
instance_addr: 127.0.0.1
kvstore:
store: inmemory
schema_config:
configs:
- from: 2020-10-24
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
ruler:
alertmanager_url: http://localhost:9093
# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
reject_old_samples: true
reject_old_samples_max_age: 168h
ingestion_rate_mb: 12
ingestion_burst_size_mb: 24
per_stream_rate_limit: 24MB
chunk_store_config:
max_look_back_period: 336h
table_manager:
retention_deletes_enabled: true
retention_period: 2190h
ingester:
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_encoding: snappy
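For reference, the 429 in this question is the active-stream limit, which is separate from the ingestion-rate settings shown above. If the stream count is legitimate, these are the corresponding knobs in limits_config (a sketch; raise them with care, since every active stream costs ingester memory):

```yaml
limits_config:
  # ...existing settings...
  max_streams_per_user: 0            # 0 disables the per-ingester stream check
  max_global_streams_per_user: 10000 # per-tenant cap across the cluster
```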
Promtail Config
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: exchange
static_configs:
- targets:
- localhost
labels:
job: exchange
__path__: /var/log/loki/exchange/*/*/*log
- job_name: syslog-ng
syslog:
listen_address: 0.0.0.0:1514
idle_timeout: 60s
label_structured_data: yes
labels:
job: "syslog-ng"
relabel_configs:
- source_labels: ['__syslog_message_hostname']
target_label: 'host'

ShinyProxy error 500: Failed to start container / Caused by: java.io.IOException: Permission denied

The ShinyProxy page is displayed, and after authentication I can see the nav bar with 2 links to the 2 applications. Then, when I click on one of them, I get an error 500 / "Failed to start container".
In the stack trace, I can see:
Caused by: java.io.IOException: Permission denied
Here is my configuration
application.yml:
proxy:
title: Open Analytics Shiny Proxy
# landing-page: /
port: 8080
authentication: simple
admin-groups: scientists
# Example: 'simple' authentication configuration
users:
- name: jack
password: password
groups: scientists
- name: jeff
password: password
groups: mathematicians
# Example: 'ldap' authentication configuration
# Docker configuration
#docker:
#cert-path: /home/none
#url: http://localhost:2375
#port-range-start: 20000
specs:
- id: 01_hello
display-name: Hello Application
description: Application which demonstrates the basics of a Shiny app
container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
container-image: openanalytics/shinyproxy-demo
access-groups: [scientists, mathematicians]
- id: 06_tabsets
container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
container-image: openanalytics/shinyproxy-demo
access-groups: scientists
logging:
file:
shinyproxy.log
shinyproxy-docker-compose.yml:
version: '2.4'
services:
shinyproxy:
container_name: shinyproxy
image: openanalytics/shinyproxy:2.3.1
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./application.yml:/opt/shinyproxy/application.yml
privileged: true
ports:
- 35624:8080
I have the same problem; here is a workaround:
sudo chown $USER:docker /run/docker.sock
However, I do not understand why this is needed, because /run/docker.sock was already root:docker.
This is under WSL2.
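A more durable alternative to re-running chown (WSL2 restarts tend to reset the socket) is making sure the calling user is in the socket's owning group. The helper below is hypothetical; it just restates the group check that the kernel performs:

```shell
# Hypothetical helper: explain a docker.sock "Permission denied" from
# the caller's group list (first argument) and the socket's owning
# group (second argument).
diagnose_sock() {
  groups="$1"; sock_group="$2"
  case " $groups " in
    *" $sock_group "*) echo "ok: caller is in group $sock_group" ;;
    *) echo "denied: add the caller to group $sock_group (e.g. sudo usermod -aG $sock_group \$USER)" ;;
  esac
}

# Check the current user against the usual docker group:
diagnose_sock "$(id -nG)" "docker"
```

If it prints the "denied" branch, `sudo usermod -aG docker $USER` followed by a re-login fixes the permission persistently, without touching the socket's ownership.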

Hyperledger fabric multihost setup for first-network example

I am trying to set up the first-network example in a multi-host environment using Docker Swarm, with the configuration below to begin with:
HOST1
Orderer
Org1-peer0
Org1-peer1
CLI
HOST2
Org2-peer0
Org2-peer1
I have only changed docker-compose-cli.yaml to make it compatible with Swarm (code given below). I am not able to add the Host2 / Org2 peers to the channel.
Executing the below steps in order:
byfn -m generate
docker stack deploy --compose-file docker-compose-cli.yaml overnet
Enter the CLI docker and execute ./scripts/script.sh mychannel
I keep getting the below error
2017-08-15 02:42:49.512 UTC [msp] GetDefaultSigningIdentity -> DEBU 006 Obtaining default signing identity
Error: Error getting endorser client channel: PER:404 - Error trying to connect to local peer
/opt/gopath/src/github.com/hyperledger/fabric/peer/common/common.go:116 github.com/hyperledger/fabric/peer/common.GetEndorserClient
/opt/gopath/src/github.com/hyperledger/fabric/peer/channel/channel.go:149 github.com/hyperledger/fabric/peer/channel.InitCmdFactory
/opt/gopath/src/github.com/hyperledger/fabric/peer/channel/join.go:138 github.com/hyperledger/fabric/peer/channel.join
/opt/gopath/src/github.com/hyperledger/fabric/peer/channel/join.go:42 github.com/hyperledger/fabric/peer/channel.joinCmd.func1
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:599 github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).execute
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:689 github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:648 github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).Execute
/opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:118 main.main
/opt/go/src/runtime/proc.go:192 runtime.main
/opt/go/src/runtime/asm_amd64.s:2087 runtime.goexit
Caused by: x509: certificate is valid for peer0.org1.example.com, peer0, not peer0.org2.example.com
docker-compose-cli.yaml
Orderer
version: '3'
networks:
overnet:
services:
orderer_example_com:
image: hyperledger/fabric-orderer
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
ports:
- 7050:7050
# - 7049:7049
networks:
- overnet
deploy:
replicas: 1
placement:
constraints: [node.role == manager]
Org1 Peers
peer0_org1_example_com:
image: hyperledger/fabric-peer
volumes:
- /var/run/:/host/var/run/
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
ports:
- 7051:7051
- 7053:7053
environment:
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=overnet
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
- CORE_PEER_ID=peer0.org1.example.com
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
networks:
- overnet
deploy:
replicas: 1
placement:
constraints: [node.role == manager]
Org2 Peers
peer0_org2_example_com:
image: hyperledger/fabric-peer
environment:
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=overnet
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
- CORE_PEER_ID=peer0.org2.example.com
- CORE_PEER_ADDRESS=peer0.org2.example.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
volumes:
- /var/run/:/host/var/run/
- ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
ports:
- 9051:7051
- 9053:7053
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
networks:
- overnet
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == worker]
CLI
cli:
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org4.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
# command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
volumes:
- /var/run/:/host/var/run/
- ./chaincode/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- orderer_example_com
- peer0_org1_example_com
- peer1_org1_example_com
- peer0_org2_example_com
- peer1_org2_example_com
networks:
- overnet
deploy:
replicas: 1
placement:
constraints: [node.role == manager]
crypto-config.yaml (I did not make any changes to this file; attaching it here for reference)
OrdererOrgs:
# ------------------------------------------------------------------
# Orderer
# ------------------------------------------------------------------
- Name: Orderer
Domain: example.com
# ----------------------------------------------------------------
# "Specs" - See PeerOrgs below for complete description
# ----------------------------------------------------------------
Specs:
- Hostname: orderer
# --------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# --------------------------------------------------------------------
PeerOrgs:
# ------------------------------------------------------------------
# Org1
# ------------------------------------------------------------------
- Name: Org1
Domain: org1.example.com
# ----------------------------------------------------------------
# "Specs"
# ----------------------------------------------------------------
# Uncomment this section to enable the explicit definition of hosts in your
# configuration. Most users will want to use Template, below
#
# Specs is an array of Spec entries. Each Spec entry consists of two fields:
# - Hostname: (Required) The desired hostname, sans the domain.
# - CommonName: (Optional) Specifies the template or explicit override for
# the CN. By default, this is the template:
#
# "{{.Hostname}}.{{.Domain}}"
#
# which obtains its values from the Spec.Hostname and
# Org.Domain, respectively.
# ---------------------------------------------------------------------------
# Specs:
# - Hostname: foo # implicitly "foo.org2.example.com"
# CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
# - Hostname: bar
# - Hostname: baz
# ---------------------------------------------------------------------------
# "Template"
# ---------------------------------------------------------------------------
# Allows for the definition of 1 or more hosts that are created sequentially
# from a template. By default, this looks like "peer%d" from 0 to Count-1.
# You may override the number of nodes (Count), the starting index (Start)
# or the template used to construct the name (Hostname).
#
# Note: Template and Specs are not mutually exclusive. You may define both
# sections and the aggregate nodes will be created for you. Take care with
# name collisions
# ---------------------------------------------------------------------------
Template:
Count: 2
# Start: 5
# Hostname: {{.Prefix}}{{.Index}} # default
# ---------------------------------------------------------------------------
# "Users"
# ---------------------------------------------------------------------------
# Count: The number of user accounts _in addition_ to Admin
# ---------------------------------------------------------------------------
Users:
Count: 1
# ------------------------------------------------------------------
# Org2: See "Org1" for full specification
# ------------------------------------------------------------------
- Name: Org2
Domain: org2.example.com
Template:
Count: 2
Users:
Count: 1
I was able to host a Hyperledger Fabric network on multiple machines using Docker Swarm mode. Swarm mode provides a network across multiple hosts/machines for the communication of the Fabric network components.
This post explains the deployment process. It creates a swarm network that all the other machines join: https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
I have set up a multi-host Fabric network. My orderer and one peer are on one host, and another peer is on a second host. For this we need to make changes in the orderer section of the configtx.yaml file:
Profiles:
CommonOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortiumJA:
Organizations:
- *test
- *mch
- *test2
- *test3
CommonOrgChannel:
Consortium: SampleConsortiumJA
Application:
<<: *ApplicationDefaults
Organizations:
- *test
- *mch
- *test2
- *test3
MJAOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortiumJA:
Organizations:
- *test
- *mch
- *test2
MJAOrgChannel:
Consortium: SampleConsortiumJA
Application:
<<: *ApplicationDefaults
Organizations:
- *test
- *mch
- *test2
MABOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortiumAB:
Organizations:
- *test2
- *mch
- *test3
MABOrgChannel:
Consortium: SampleConsortiumAB
Application:
<<: *ApplicationDefaults
Organizations:
- *test
- *mch
- *test3
MBJOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortiumBJ:
Organizations:
- *test3
- *mch
- *test
MBJOrgChannel:
Consortium: SampleConsortiumBJ
Application:
<<: *ApplicationDefaults
Organizations:
- *test3
- *mch
- *test
Organizations:
- &OrdererOrg
Name: OrdererOrg
# ID to load the MSP definition as
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/mch.test/msp
- &test
Name: test
# ID to load the MSP definition as
ID: testMSP
MSPDir: crypto-config/peerOrganizations/test.test/msp
AnchorPeers:
- Host: peer0.test.test
Port: 7054
- &airtel
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: airtel
# ID to load the MSP definition as
ID: test2MSP
MSPDir: crypto-config/peerOrganizations/test2.test/msp
AnchorPeers:
- Host: peer0.test2.test
Port: 7055
- &bsnl
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: test3
# ID to load the MSP definition as
ID: test3MSP
MSPDir: crypto-config/peerOrganizations/test3.test/msp
AnchorPeers:
- Host: peer0.test3.test
Port: 7059
- &mch
Name: mch
# ID to load the MSP definition as
ID: mchMSP
MSPDir: crypto-config/peerOrganizations/mch.test/msp
AnchorPeers:
- Host: peer0.mch.test
Port: 7051
Orderer: &OrdererDefaults
OrdererType: solo
Addresses:
- 10.64.253.213:7050
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- 127.0.0.1:9092
Organizations:
Application: &ApplicationDefaults
Organizations:
===============================================================
After this, bring up the orderer and peer1 on one server and peer2 on a different server. Create the channel using the IP of the orderer instead of its name, then copy the channel block file to the other peer as well and join both peers one at a time. Install the chaincode on both peers and you are good to go.
You have to use Docker Swarm to implement a multi-host Hyperledger Fabric blockchain network.
Read the steps at the following URL:
https://github.com/chudsonsolomon/Block-chain-Swarm-Multi-Host
First of all, I don't think you have to
Enter the CLI docker and execute ./scripts/script.sh mychannel
Or have you commented out the docker-compose file as described in the "Start the network" step?
On the other hand, I have managed to set up a multi-host environment using Docker. However, instead of defining an overlay network, I defined network_mode: host for all the Docker containers that I start up.
Could you show the logs that appear in the peer and in the orderer?
I guess the problem comes from the docker-compose service names:
orderer_example_com => orderer.example.com
peer0_org1_example_com => peer0.org1.example.com
...
Use dots (.) rather than underscores (_) for naming, so the service names match the hostnames in the TLS certificates.
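Swarm stack service names cannot actually contain dots, but you can keep the underscore service names and still expose the dotted certificate hostname to other containers via a per-service network alias, which the compose v3 schema supports. A sketch showing only the relevant keys:

```yaml
version: '3'
networks:
  overnet:
services:
  peer0_org1_example_com:
    # ...existing image/environment/volumes...
    networks:
      overnet:
        aliases:
          - peer0.org1.example.com
```

With the alias in place, peer0.org1.example.com resolves on the overlay network and matches the name in the peer's TLS certificate, avoiding x509 hostname mismatches like the one above.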
You also need Docker Swarm for a multi-host setup.
