Docker mosquitto - Error unable to load auth plugin

I really need your help!
I'm having trouble loading a plugin in a Dockerized Mosquitto. The same plugin loads fine on a local (non-Docker) Mosquitto installation.
The error reported in the Docker console is:
dev_instance_mosquitto_1 exited with code 13
The errors reported in the Mosquitto log file are:
1626352342: Loading plugin: /mosquitto/config/mosquitto_message_timestamp.so
1626352342: Error: Unable to load auth plugin "/mosquitto/config/mosquitto_message_timestamp.so".
1626352342: Load error: Error relocating /mosquitto/config/mosquitto_message_timestamp.so: __sprintf_chk: symbol not found
Here is a tree output of the project:
mosquitto/
├── Dockerfile
├── config
│ ├── acl
│ ├── ca_certificates
│ │ ├── README
│ │ ├── broker_CA.crt
│ │ ├── mqtt.test.perax.com.p12
│ │ ├── private_key.key
│ │ └── server_ca.crt
│ ├── certs
│ │ ├── CA_broker_mqtt.crt
│ │ ├── README
│ │ ├── serveur_broker.crt
│ │ └── serveur_broker.key
│ ├── conf.d
│ │ └── default.conf
│ ├── mosquitto.conf
│ ├── mosquitto_message_timestamp.so
│ └── pwfile
├── data
│ └── mosquitto.db
└── log
└── mosquitto.log
Here is the Dockerfile:
FROM eclipse-mosquitto
COPY config/ /mosquitto/config
COPY config/mosquitto_message_timestamp.so /usr/lib/mosquitto_message_timestamp.so
RUN install /usr/lib/mosquitto_message_timestamp.so /mosquitto/config/
Here is the docker-compose.yml:
mosquitto:
  restart: always
  build: ./mosquitto/
  image: "eclipse-mosquitto/latests"
  ports:
    - "1883:1883"
    - "9001:9001"
  volumes:
    - ./mosquitto/config/:/mosquitto/config/
    - ./mosquitto/data/:/mosquitto/data/
    - ./mosquitto/log/mosquitto.log:/mosquitto/log/mosquitto.log
  user: 1883:1883
  environment:
    - PUID=1883
    - PGID=1883
Here is the mosquitto.conf:
persistence true
persistence_location /mosquitto/data
log_dest file /mosquitto/log/mosquitto.log
include_dir /mosquitto/config/conf.d
plugin /mosquitto/config/mosquitto_message_timestamp.so
I'm using Mosquitto 2.0.10 on an Ubuntu 18.04.5 LTS server.
Thanks in advance for your help.

Your best bet here is probably to set up a multi-stage Docker build that uses an Alpine-based image to build the plugin, then copies it into the eclipse-mosquitto image. The eclipse-mosquitto image is based on Alpine, which uses musl libc, so a plugin compiled on Ubuntu against glibc references fortified symbols such as __sprintf_chk that musl does not provide; hence the relocation error.
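A minimal sketch of such a multi-stage Dockerfile, assuming the plugin is built from a single source file (mosquitto_message_timestamp.c, a hypothetical name) and that the headers from Alpine's mosquitto-dev package are all it needs:

# Stage 1: compile the plugin against musl libc and Alpine's mosquitto headers
FROM alpine:3.14 AS builder
RUN apk add --no-cache build-base mosquitto-dev
WORKDIR /build
COPY mosquitto_message_timestamp.c .
RUN gcc -fPIC -shared -o mosquitto_message_timestamp.so mosquitto_message_timestamp.c

# Stage 2: copy the musl-built plugin into the broker image
FROM eclipse-mosquitto
COPY --from=builder /build/mosquitto_message_timestamp.so /mosquitto/config/

With the .so compiled inside Alpine, the existing plugin line in mosquitto.conf can stay as it is.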

Related

Copy all files of sub and nested sub directories

This is my project file structure:
java-project/
├── docker.compose.yml
├── pom.xml
└── services/
├── a/
│ ├── Dockerfile
│ ├── pom.xml
│ ├── src/
│ │ ├── pom.xml
│ │ ├── xxx
│ │ └── xxx
│ └── target/
│ ├── pom.xml
│ └── xxxx
└── b/
├── Dockerfile
├── pom.xml
├── src/
│ ├── pom.xml
│ ├── xxx
│ └── xxx
└── target/
├── pom.xml
└── xxxx
I want to copy the entire contents of the project's services folder (including all of its subfolders). Basically, I want to replicate the current project structure, with every file and folder, in the Docker image as well, so that the mvn build executes successfully.
I am doing the following in the Dockerfile, but I don't see all of the contents:
COPY services/**/pom.xml ./services/
What am I doing wrong here? TIA
Let's look at your COPY instruction:
# <src> <dest>
COPY services/**/pom.xml ./services/
Under the hood, Docker matches <src> using Go's filepath.Match rules. This means the instruction doesn't support the globstar (**) the way shell glob patterns do: ** behaves like a plain *, matching only a single path segment, and files matched by a wildcard are copied into <dest> without their directory structure. However, your question suggests you want to copy everything inside services, not only the pom.xml files.
You can copy everything inside your local services directory using:
COPY services ./services/
If you want to exclude certain subdirectories or files, you can specify this using a .dockerignore.
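For example, a minimal .dockerignore along these lines (a sketch, assuming it is Maven's target/ output you want to keep out of the image) placed at the build-context root:

# .dockerignore
# exclude Maven build output from every service
**/target

Unlike COPY, .dockerignore does support the ** wildcard for matching any number of directories.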

How to create multiple containers in same pods which have separate deployment.yaml files?

tl;dr: in docker-compose, inter-container communication is possible via localhost. I want to do the same in k8s; however, I have separate deployment.yaml files for each component. How do I link them?
I have a Kubernetes Helm package which contains sub Helm packages. The folder structure is as follows:
A
├── Chart.yaml
├── values.yaml
├── charts
│ ├── component1
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── configmap.yaml
│ │ │ ├── deployment.yaml
│ │ │ ├── hpa.yaml
│ │ │ ├── ingress.yaml
│ │ │ ├── service.yaml
│ │ │ ├── serviceaccount.yaml
│ │ └── values.yaml
│ ├── component2
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── certs.yaml
│ │ │ ├── configmap.yaml
│ │ │ ├── pdb.yaml
│ │ │ ├── role.yaml
│ │ │ ├── statefulset.yaml
│ │ │ ├── pvc.yaml
│ │ │ └── svc.yaml
│ │ ├── values-production.yaml
│ │ └── values.yaml
In docker-compose, I was able to communicate between component1 and component2 via ports using localhost.
However, in this architecture, I have separate deployment.yaml files for these components. I know that if I keep them as containers in a single deployment.yaml file, I can communicate via localhost.
Question: How do I put these containers in the same pod, given that they are defined in separate deployment.yaml files?
That's not possible. Pods are the smallest deployable unit in Kubernetes and consist of one or more containers. All containers inside a pod share the same network namespace (among other things). Containers outside that pod can only reach it via FQDN or IP; for each container outside a pod, "localhost" means something completely different. Much like running docker-compose on different hosts, they cannot connect using localhost.
You can use the service's name to get similar behaviour. Instead of calling http://localhost:8080 you can simply use http://component1:8080 to reach component1 from component2, assuming the service in component1/templates/service.yaml is named component1 and both are in the same namespace. In general there is a DNS record for every service with the schema <service>.<namespace>, e.g. component1.default for component1 running in the default namespace. If component2 were in a different namespace you would use http://component1.default:8080.
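A minimal sketch of the Service that makes that name resolvable, assuming component1's pods carry the label app: component1 and listen on port 8080 (both assumptions, since the chart templates aren't shown):

apiVersion: v1
kind: Service
metadata:
  name: component1
spec:
  selector:
    app: component1    # must match the pod labels in component1's deployment.yaml
  ports:
    - port: 8080       # port exposed by the service
      targetPort: 8080 # port the container listens on

With this in place, the cluster DNS resolves component1 (and component1.<namespace>) to the Service's ClusterIP.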

Docker isn't mounting the directory? "OCI runtime create failed: container_linux.go:346: no such file or directory: unknown"

On my Windows 10 Home computer with Docker Toolbox, Docker is having trouble mounting the drives. I've already run dos2unix on the entrypoint.sh file.
The full error is as such:
ERROR: for users Cannot start service users: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/src/app/entrypoint.sh\": stat /usr/src/app/entrypoint.sh: no such file or directory": unknown
My docker-compose.yml:
version: '3.7'

services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    entrypoint: ['/usr/src/app/entrypoint.sh']
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db
Curiously, when I comment out the "volumes" section, it works! But I want to be able to mount volumes in the future.
Directory structure can be seen as such:
D:\flask-react-auth
│ .gitignore
│ .gitlab-ci.yml
│ docker-compose.yml
│ README.md
│ release.sh
│
└───services
│
└───users
│ .coveragerc
│ .dockerignore
│ Dockerfile
│ Dockerfile.prod
│ entrypoint.sh
│ manage.py
│ requirements-dev.txt
│ requirements.txt
│ setup.cfg
│ tree.txt
│
└───project
│ config.py
│ __init__.py
│
├───api
│ │ ping.py
│ │ __init__.py
│ │
│ └───users
│ admin.py
│ crud.py
│ models.py
│ views.py
│ __init__.py
│
├───db
│ create.sql
│ Dockerfile
│
└───tests
conftest.py
pytest.ini
test_admin.py
test_config.py
test_ping.py
test_users.py
test_users_unit.py
__init__.py
I have added the D:\flask-react-auth\ to the 'Shared Folders' on virtualbox as well.
The answer seems obvious to me:
When you run the code as is:
* it mounts the current working directory to '/usr/src/app';
* the current working directory does not contain a file 'entrypoint.sh';
* it tries to run '/usr/src/app/entrypoint.sh', which is not there, so it fails.
When you comment out that volume mount:
* I assume the image already contains '/usr/src/app/entrypoint.sh', so it just works.
I think you should change the mount from
volumes:
  - '.:/usr/src/app'
to
volumes:
  - './services/users:/usr/src/app'
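One quick way to see what actually ends up inside the container is to override the entrypoint and list the mount target (a sketch, reusing the users service from the compose file above; --entrypoint replaces the failing script for this one run):

docker-compose run --rm --entrypoint "ls -l /usr/src/app" users

If entrypoint.sh is missing from that listing, the mount source is wrong, or (common with Docker Toolbox) the drive is not actually shared with the VirtualBox VM.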

Build and Deploy Multiple Docker images to kubernetes

I have an application with the structure below, in which multiple services have their own Dockerfile. I would like to deploy my application to Kubernetes via Jenkins using Helm, but I cannot decide on the best way to handle this.
Should I try to use multi-stage builds? If yes, how can I handle this?
Should I create two Helm charts, one for each service, or is there a way to handle this with a single Helm chart?
└── app-images-dashboard
├── Readme.md
├── cors-proxy
│ ├── Dockerfile
│ ├── lib
│ │ ├── cors-anywhere.js
│ │ ├── help.txt
│ │ ├── rate-limit.js
│ │ └── regexp-top-level-domain.js
│ ├── package.json
│ └── server.js
└── app-images-dashboard
├── Dockerfile
├── components
│ └── image_item.js
├── images
│ └── beta.png
├── index.html
├── main.js
└── stylesheets
└── style.css
A Helm chart represents a whole application. You have one application with two slices, so you need only one Helm chart.
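A sketch of how that single chart could be laid out, with one deployment and service template per image (the chart and template names here are illustrative):

app-images-dashboard-chart/
├── Chart.yaml
├── values.yaml                  # image names and tags for both components
└── templates/
    ├── cors-proxy-deployment.yaml
    ├── cors-proxy-service.yaml
    ├── dashboard-deployment.yaml
    └── dashboard-service.yaml

Jenkins can then build and push the two images independently and roll out both with a single helm upgrade --install, passing the fresh image tags via --set.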

Unable to connect second org peer to channel in HLF

I am following the link below to set up my first network on Hyperledger Fabric: http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
I am able to follow pretty much all of the steps mentioned in this setup, and all my Docker containers are working fine. The issue is that when I try to join the second org's peer to the channel, using the command below:
"Join peer0.dfarmretail.com to the channel."
docker exec -e "CORE_PEER_LOCALMSPID=DfarmretailMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@dfarmretail.com/msp" peer0.dfarmretail.com peer channel join -o orderer.dfarmadmin.com:7050 -b dfarmchannel.block
I get the error below:
error: error getting endorser client for channel: endorser client failed to connect to peer0.dfarmretail.com:8051: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 172.20.0.6:8051: connect: connection refused"
Please see the files below.
My docker-compose.yaml:
version: '2'

networks:
  dfarm:

services:
  ca.dfarmadmin.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.dfarmadmin.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.dfarmadmin.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/ad62c9f5133ad87c5f94d6b3175eb059395b5f68caf43e439e6bb7d42d8296e4_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/dfarmadmin.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.dfarmadmin.com
    networks:
      - dfarm

  orderer.dfarmadmin.com:
    container_name: orderer.dfarmadmin.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./config/:/etc/hyperledger/configtx
      - ./crypto-config/ordererOrganizations/dfarmadmin.com/orderers/orderer.dfarmadmin.com/:/etc/hyperledger/msp/orderer
      - ./crypto-config/peerOrganizations/dfarmadmin.com/peers/peer0.dfarmadmin.com/:/etc/hyperledger/msp/peerDfarmadmin
    networks:
      - dfarm

  peer0.dfarmadmin.com:
    container_name: peer0.dfarmadmin.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.dfarmadmin.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.dfarmadmin.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dfarmadmin.com:7051
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/dfarmadmin.com/peers/peer0.dfarmadmin.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/dfarmadmin.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.dfarmadmin.com
      - couchdb
    networks:
      - dfarm

  peer0.dfarmretail.com:
    container_name: peer0.dfarmretail.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.dfarmretail.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=DfarmretailMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.dfarmretail.com:8051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dfarmretail.com:8051
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 8051:8051
      - 8053:8053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/dfarmretail.com/peers/peer0.dfarmretail.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/dfarmretail.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.dfarmadmin.com
      - couchdb
    networks:
      - dfarm

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - dfarm

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.dfarmadmin.com:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dfarmadmin.com/users/Admin@dfarmadmin.com/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    networks:
      - dfarm
    depends_on:
      - orderer.dfarmadmin.com
      - peer0.dfarmadmin.com
      - peer0.dfarmretail.com
      - couchdb
My start.sh:
#!/bin/bash
#
# Exit on first error, print all commands.
set -ev
# don't rewrite paths for Windows Git Bash users
export MSYS_NO_PATHCONV=1
FABRIC_START_TIMEOUT=90
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose.yml up -d ca.dfarmadmin.com orderer.dfarmadmin.com peer0.dfarmadmin.com peer0.dfarmretail.com couchdb
# wait for Hyperledger Fabric to start
# incase of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
export FABRIC_START_TIMEOUT=10
#echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=DfarmadminMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmadmin.com/msp" peer0.dfarmadmin.com peer channel create -o orderer.dfarmadmin.com:7050 -c dfarmchannel -f /etc/hyperledger/configtx/channel.tx
# Join peer0.dfarmadmin.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=DfarmadminMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmadmin.com/msp" peer0.dfarmadmin.com peer channel join -b dfarmchannel.block
# Join peer0.dfarmretail.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=DfarmretailMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmretail.com/msp" peer0.dfarmretail.com peer channel join -o orderer.dfarmadmin.com:7050 -b dfarmchannel.block
This is my project folder structure:
├── config
│ ├── channel.tx
│ ├── DfarmadminMSPanchors.tx
│ ├── DfarmretailMSPanchors.tx
│ └── genesis.block
├── configtx.yaml
├── crypto-config
│ ├── 1
│ ├── ordererOrganizations
│ │ └── dfarmadmin.com
│ │ ├── ca
│ │ │ ├── 5f0077f4811e16e3bac8b64ae22e35bd52f3205538587e0a52eaa49e86b57c4c_sk
│ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ ├── orderers
│ │ │ └── orderer.dfarmadmin.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── ecda7305295e86d0890aea73874c80c21a9b29dc04435ef521f1025194a366c8_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer.dfarmadmin.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ ├── tlsca
│ │ │ ├── 199db47c8e231c6cff329e1fdfa8b522ef7b74847808f61045057b56498f49fd_sk
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── users
│ │ └── Admin@dfarmadmin.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 0c5004c87035e89b735940b5b446d59d138c1af8f42b73980c7d7b03373ee333_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── peerOrganizations
│ ├── dfarmadmin.com
│ │ ├── ca
│ │ │ ├── ad62c9f5133ad87c5f94d6b3175eb059395b5f68caf43e439e6bb7d42d8296e4_sk
│ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ ├── peers
│ │ │ ├── peer0.dfarmadmin.com
│ │ │ │ ├── msp
│ │ │ │ │ ├── admincerts
│ │ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ │ ├── cacerts
│ │ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ │ ├── keystore
│ │ │ │ │ │ └── 66f1271392ea3ce4d3548e91ee5620591e79e538e36a69b38786b3f11f3c53e2_sk
│ │ │ │ │ ├── signcerts
│ │ │ │ │ │ └── peer0.dfarmadmin.com-cert.pem
│ │ │ │ │ └── tlscacerts
│ │ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ │ └── tls
│ │ │ │ ├── ca.crt
│ │ │ │ ├── server.crt
│ │ │ │ └── server.key
│ │ │ └── peer0.dfarmretail.com
│ │ │ └── msp
│ │ │ └── keystore
│ │ ├── tlsca
│ │ │ ├── f6f49b0ff9c7f850e5f655dfbb88ce7b8c07f3f872d151346ac65c6f5f2ef80d_sk
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── users
│ │ ├── Admin@dfarmadmin.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── 9c65737a78159bf977b9e38299c9c8e02278f76c3d4650caf32a4da845947547_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── client.crt
│ │ │ └── client.key
│ │ └── User1@dfarmadmin.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── User1@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 458f1f699493828d88507fabb9ad2dab4fc2cc8acdaf4aa65c1fda12710227dd_sk
│ │ │ ├── signcerts
│ │ │ │ └── User1@dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── dfarmretail.com
│ ├── ca
│ │ ├── 8f839598652d94f6ab6cb3d0f15390df5fe8dd7b6bb88c5c3b75205b975bc8d2_sk
│ │ └── ca.dfarmretail.com-cert.pem
│ ├── msp
│ │ ├── admincerts
│ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.dfarmretail.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.dfarmretail.com-cert.pem
│ ├── peers
│ │ └── peer0.dfarmretail.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmretail.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 2115fb2c52372041918517c2dcef91cb7cc66ca4a987a1606a98e9b75d78ab91_sk
│ │ │ ├── signcerts
│ │ │ │ └── peer0.dfarmretail.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmretail.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── server.crt
│ │ └── server.key
│ ├── tlsca
│ │ ├── 8b26e70a303598e0012852426ac93be726210c5911baf4695785cf595bad3041_sk
│ │ └── tlsca.dfarmretail.com-cert.pem
│ └── users
│ ├── Admin@dfarmretail.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmretail.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 7ac01c0d8b0b4f3245d1e68fe34d34a2e1727059c459c1418b68b66870328eb2_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmretail.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── User1@dfarmretail.com
│ ├── msp
│ │ ├── admincerts
│ │ │ └── User1@dfarmretail.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.dfarmretail.com-cert.pem
│ │ ├── keystore
│ │ │ └── e40665832cc9d4fce41f72b04505655f9eb46e3b704547987f03863de37331b5_sk
│ │ ├── signcerts
│ │ │ └── User1@dfarmretail.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.dfarmretail.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── client.crt
│ └── client.key
├── crypto-config.yaml
├── docker-compose.yml
├── generate.sh
├── init.sh
├── README.md
├── start.sh
├── stop.sh
└── teardown.sh
docker logs for dfarmretail container
docker logs orderer.dfarmadmin.com
I have tried a lot to rectify the issue but have been unable to, so could you please help with this?
Thanks in advance.
Is your peer0.dfarmretail.com peer running OK? (I would check its log.) In your docker-compose file you are configuring both of your peers to use the same CouchDB container, but you need to configure a separate CouchDB for each peer. The retail peer may be failing because the CouchDB container is already allocated to the admin peer. The second CouchDB container will have to use a different host port, and the retail peer will have to be changed to connect to it.
I notice that you are exposing port 7053 on your peers. Port 7053 was used in earlier versions of Fabric for the event hub, I think. What version of Fabric are you using?
You don't have to use CouchDB for your peers, but if you do configure them to use CouchDB (CORE_LEDGER_STATE_STATEDATABASE=CouchDB) then you need a separate CouchDB container for each.
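A sketch of the extra service in docker-compose.yml (the couchdb1 name and the 6984 host port are illustrative, not taken from the original setup):

  couchdb1:
    container_name: couchdb1
    image: hyperledger/fabric-couchdb
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 6984:5984    # different host port; inside the dfarm network it still listens on 5984
    networks:
      - dfarm

and, in the peer0.dfarmretail.com service, point the ledger at the new container:

      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984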
Following updates to the question and comments:
The original error shows a "connection refused", but from the log it looks like the peer is still running, so this appears to be some kind of networking error. There is also a line in the dfarmretail peer log showing that the chaincode listen address is using port 7052, whereas I think it should be 8052.
I suggest you add these 2 config lines to the dfarmadmin peer in the docker compose file:
- CORE_PEER_LISTENADDRESS=0.0.0.0:7051
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
and for dfarmretail peer:
- CORE_PEER_LISTENADDRESS=0.0.0.0:8051
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
This should clear up any port ambiguity and make the peers listen on all interfaces.
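As a quick sanity check after restarting (a sketch; assumes netcat is available on the host), confirm each peer is reachable on its mapped port:

nc -zv localhost 7051
nc -zv localhost 8051

then check docker logs peer0.dfarmretail.com again to confirm the listen and chaincode listen addresses now show 8051 and 8052.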
You could try the free tool from www.chaincoder.org, which will generate all the config files for you and lets you easily code and deploy chaincodes on peers.
