Docker isn't mounting the directory? "OCI runtime create failed: container_linux.go:346: no such file or directory: unknown" - docker

On my Windows 10 Home computer with Docker Toolbox, Docker is having trouble mounting the drives. I've already run dos2unix on the entrypoint.sh file.
The full error is:
ERROR: for users Cannot start service users: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/src/app/entrypoint.sh\": stat /usr/src/app/entrypoint.sh: no such file or directory": unknown
My docker-compose.yml:
version: '3.7'
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    entrypoint: ['/usr/src/app/entrypoint.sh']
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db
Curiously, when I comment out the "volumes" section, it works! But I want to be able to mount volumes in the future.
The directory structure is as follows:
D:\flask-react-auth
│ .gitignore
│ .gitlab-ci.yml
│ docker-compose.yml
│ README.md
│ release.sh
│
└───services
│
└───users
│ .coveragerc
│ .dockerignore
│ Dockerfile
│ Dockerfile.prod
│ entrypoint.sh
│ manage.py
│ requirements-dev.txt
│ requirements.txt
│ setup.cfg
│ tree.txt
│
└───project
│ config.py
│ __init__.py
│
├───api
│ │ ping.py
│ │ __init__.py
│ │
│ └───users
│ admin.py
│ crud.py
│ models.py
│ views.py
│ __init__.py
│
├───db
│ create.sql
│ Dockerfile
│
└───tests
conftest.py
pytest.ini
test_admin.py
test_config.py
test_ping.py
test_users.py
test_users_unit.py
__init__.py
I have added D:\flask-react-auth\ to the 'Shared Folders' in VirtualBox as well.
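For reference, this is the quick check I used to confirm the dos2unix step actually took effect (the sed command is an equivalent fallback when dos2unix is not installed; the file created here is just a throwaway example):

```shell
# create a throwaway script with CRLF endings, as a Windows editor would save it
printf '#!/bin/sh\r\necho hello\r\n' > example.sh

# count lines ending in a carriage return: 2 means CRLF endings are present
grep -c $'\r' example.sh

# strip the carriage returns in place (same effect as: dos2unix example.sh)
sed -i 's/\r$//' example.sh

# the count is now 0; grep exits non-zero when nothing matches, hence || true
grep -c $'\r' example.sh || true
```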

The answer seems obvious to me:
When you run the code as is:
* it mounts the current working directory to '/usr/src/app';
* the current working directory does not contain a file 'entrypoint.sh';
* Docker tries to run '/usr/src/app/entrypoint.sh', but it is not there, so it fails.
When you comment out that volume mount:
* I assume the image already contains '/usr/src/app/entrypoint.sh', so it just works.
I think you should change the volume mount from
volumes:
  - '.:/usr/src/app'
to
volumes:
  - './services/users:/usr/src/app'
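To spell out the mechanics (a sketch of the corrected service; the rest of the compose file is unchanged): a bind mount completely shadows whatever the image built into /usr/src/app, so the host-side directory itself must contain entrypoint.sh:

```yaml
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    entrypoint: ['/usr/src/app/entrypoint.sh']
    volumes:
      # The host path must be the directory that actually holds entrypoint.sh;
      # at runtime this mount hides everything the image placed in /usr/src/app.
      - './services/users:/usr/src/app'
```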

Related

dockerignore with Docker compose not ignoring .env file

Below you can see a representation of my project folder structure. I have two microservices, called auth and profile, which are located inside the services directory. The docker-containers directory holds my docker-compose.yaml file, in which I list all the images of my application.
.
├── services
│ ├── auth
│ │ ├── src
│ │ ├── dist
│ │ ├── .env
│ │ ├── package.json
│ │ ├── Dockerfile
│ │ ├── .dockerignore
│ ├── profile
│ │ ├── src
│ │ ├── dist
│ │ ├── .env
│ │ ├── package.json
│ │ ├── Dockerfile
│ │ ├── .dockerignore
└── docker-containers
├── docker-compose.yaml
Below is my docker-compose.yaml file, in which I define the location of the auth service (among other images). I also want to override the local .env file with the values from the environment list. But when I run the Compose project, the values from my local .env file are still being used.
version: "3.8"
services:
  auth:
    build:
      context: ../services/auth
    container_name: auth-service
    depends_on:
      - redis
      - mongo
    ports:
      - 3000:3000
    volumes:
      - ../services/auth/:/app
      - /app/node_modules
    command: yarn dev
    env_file: ../services/auth/.env
    environment:
      FASTIFY_PORT: 3000
      REDIS_HOST: redis
      FASTIFY_ADDRESS: "0.0.0.0"
      TOKEN_SECRET: 1d037ffb614158a9032c02f479b36f42dd33ba325f76a7692498c33839afc5d547eae2b47f0f4926b76b08fc91d19352
      MONGO_URL: mongodb://root:example@mongo:27017
  mongo:
    image: mongo
    container_name: mongo
    restart: on-failure
    ports:
      - 2717:27017
    volumes:
      - ./mongo-data:/data
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  redis:
    image: redis
    container_name: redis
    volumes:
      - ./redis-data:/data
    ports:
      - 6379:6379
These are my Dockerfile and the .dockerignore file inside the auth service. Based on my understanding, the local .env file should not be copied into the Docker build context, because it is listed inside the .dockerignore file.
But when I log a value from my environment variables inside the Docker application, it still logs the old value from my local .env file.
FROM node:16-alpine
WORKDIR /app
COPY ["package.json", "yarn.lock", "./"]
RUN yarn
COPY dist .
EXPOSE 3000
CMD [ "yarn", "start" ]
node_modules
Dockerfile
.env*
.prettier*
.git
.vscode/
The weird part is that the node_modules folder of the auth service is being ignored, but for some reason the environment variables inside the Docker container are still based on the local .env file.
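A note on the mechanics that may explain this (my reading of the Compose behaviour, not verified against this exact project): `.dockerignore` only filters what is sent to the build context for `COPY`/`ADD`; it has no effect on `volumes:` or `env_file:`, which are resolved on the host at run time. So the bind mount below puts the local `.env` straight back inside `/app`, and if the app calls a dotenv loader itself, it will read that file regardless of what the image contains (whether its values win over the injected environment depends on the loader's override setting):

```yaml
services:
  auth:
    build:
      context: ../services/auth      # .dockerignore applies only here (COPY/ADD)
    volumes:
      - ../services/auth/:/app       # runtime bind mount: brings the local .env back into /app
    env_file: ../services/auth/.env  # also read on the host at 'up' time; ignores .dockerignore
    environment:
      REDIS_HOST: redis              # 'environment:' overrides 'env_file:' for the container env,
                                     # but not what a dotenv library reads from /app/.env
```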

Docker mosquitto - Error unable to load auth plugin

I really need your help!
I'm encountering a problem with the loading of a plugin in a Docker mosquitto.
I tried to load it in a local version of mosquitto and it worked well.
The error returned in the Docker console is:
dev_instance_mosquitto_1 exited with code 13
The errors returned in mosquitto's log file are:
1626352342: Loading plugin: /mosquitto/config/mosquitto_message_timestamp.so
1626352342: Error: Unable to load auth plugin "/mosquitto/config/mosquitto_message_timestamp.so".
1626352342: Load error: Error relocating /mosquitto/config/mosquitto_message_timestamp.so: __sprintf_chk: symbol not found
Here is a tree output of the project:
mosquitto/
├── Dockerfile
├── config
│ ├── acl
│ ├── ca_certificates
│ │ ├── README
│ │ ├── broker_CA.crt
│ │ ├── mqtt.test.perax.com.p12
│ │ ├── private_key.key
│ │ └── server_ca.crt
│ ├── certs
│ │ ├── CA_broker_mqtt.crt
│ │ ├── README
│ │ ├── serveur_broker.crt
│ │ └── serveur_broker.key
│ ├── conf.d
│ │ └── default.conf
│ ├── mosquitto.conf
│ ├── mosquitto_message_timestamp.so
│ └── pwfile
├── data
│ └── mosquitto.db
└── log
└── mosquitto.log
Here is the Dockerfile:
FROM eclipse-mosquitto
COPY config/ /mosquitto/config
COPY config/mosquitto_message_timestamp.so /usr/lib/mosquitto_message_timestamp.so
RUN install /usr/lib/mosquitto_message_timestamp.so /mosquitto/config/
Here is the docker-compose.yml:
mosquitto:
  restart: always
  build: ./mosquitto/
  image: "eclipse-mosquitto/latests"
  ports:
    - "1883:1883"
    - "9001:9001"
  volumes:
    - ./mosquitto/config/:/mosquitto/config/
    - ./mosquitto/data/:/mosquitto/data/
    - ./mosquitto/log/mosquitto.log:/mosquitto/log/mosquitto.log
  user: 1883:1883
  environment:
    - PUID=1883
    - PGID=1883
Here is the mosquitto.conf:
persistence true
persistence_location /mosquitto/data
log_dest file /mosquitto/log/mosquitto.log
include_dir /mosquitto/config/conf.d
plugin /mosquitto/config/mosquitto_message_timestamp.so
I'm using mosquitto 2.0.10 on an Ubuntu server, version 18.04.5 LTS.
Thanks in advance for your help.
Your best bet here is probably to set up a multi-stage Docker build that uses an Alpine-based image to build the plugin and then copies it into the eclipse-mosquitto image.
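A sketch of what that could look like (the plugin's source file name and build dependencies are hypothetical here; the point is that eclipse-mosquitto images are Alpine/musl-based, which is why a glibc-built .so fails with `__sprintf_chk: symbol not found`):

```dockerfile
# Build stage: compile the plugin against musl so its symbols match the
# eclipse-mosquitto runtime (source layout and deps are assumptions)
FROM alpine:3.14 AS builder
RUN apk add --no-cache build-base mosquitto-dev
WORKDIR /build
COPY mosquitto_message_timestamp.c .
RUN gcc -fPIC -shared -o mosquitto_message_timestamp.so mosquitto_message_timestamp.c

# Final stage: copy the musl-built plugin into the broker image
FROM eclipse-mosquitto
COPY --from=builder /build/mosquitto_message_timestamp.so /mosquitto/config/
COPY config/ /mosquitto/config
```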

Jumping around of the dockerfile/docker-compose context

My current project structure looks something like this:
/home/some/project
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
Compose makes four containers; two of them are dev & prod versions of the application, which use the appropriate dev & prod files. As you can see, the project root is a little overloaded, so I'd like to move all the deployment stuff into a separate directory, to get the following:
/home/some/project
├───deployment
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
The idea is to end up with the following structure on the Docker host:
host
├───dev
│ ├───src
│ └───users
├───prod
│ ├───src
│ └───users
└───project
├───deployment
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
and two containers, app_dev and app_prod, whose volumes are mounted into the folders /host/dev and /host/prod respectively.
I tried multiple solutions found here, but all of them, in different variations, returned the following errors:
ERROR: Service 'app_dev' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder264200969/ca.cer: no such file or directory
ERROR: Service 'app_dev' failed to build: COPY failed: Forbidden path outside the build context: ../ca.cer ()
The error always appears while docker-compose is trying to build an image, on this line:
COPY deployment/ca.cer /code/
Please tell me how to achieve the desired result.
The deployment folder is outside of the build context. Docker passes all the files inside the deployment folder as the build context; however, the deployment folder itself is not part of it.
Change your COPY statement to:
COPY ./ca.cer /code/
since the build context is already rooted at that folder.
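Alternatively (a sketch, assuming the compose file stays in deployment/): you can keep paths like `deployment/ca.cer` in the Dockerfile by widening the build context to the project root and pointing `dockerfile:` at the deployment directory:

```yaml
# docker-compose.yml inside deployment/
services:
  app_dev:
    build:
      context: ..                       # project root becomes the build context
      dockerfile: deployment/Dockerfile
# With this context, 'COPY deployment/ca.cer /code/' resolves as written,
# and src/ and users/ are also available to COPY if needed.
```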

How to setup pm2-logrotate for docker with nodejs running pm2?

I have the Docker image keymetrics/pm2:8-jessie, and my Node.js application runs well with pm2. I tried to add pm2-logrotate to rotate the logs by size and date, adding the following to my Dockerfile. The pm2-logrotate module starts, but its target PID is null. Can anyone help, please?
FROM keymetrics/pm2:8-jessie
RUN npm install
RUN pm2 install pm2-logrotate
RUN pm2 set pm2-logrotate:retain 90
RUN pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
RUN pm2 set pm2-logrotate:max_size 10M
RUN pm2 set pm2-logrotate:rotateInterval 0 0 * * *
RUN pm2 set pm2-logrotate:rotateModule true
RUN pm2 set pm2-logrotate:workerInterval 10
ENV NODE_ENV=$buildenv
ENV NPM_CONFIG_LOGLEVEL warn
CMD ["sh", "-c", "pm2-runtime start pm2.${NODE_ENV}.config.js"]
pm2 ls
┌──────────────┬────┬─────────┬─────────┬─────┬────────┬─────────┬────────┬─────┬────────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────────┼────┼─────────┼─────────┼─────┼────────┼─────────┼────────┼─────┼────────────┼──────┼──────────┤
│ app_server │ 1 │ 1.0.0 │ cluster │ 150 │ online │ 1 │ 2h │ 0% │ 104.4 MB │ root │ disabled │
└──────────────┴────┴─────────┴─────────┴─────┴────────┴─────────┴────────┴─────┴────────────┴──────┴──────────┘
Module
┌───────────────┬────┬─────────┬─────┬────────┬─────────┬─────┬───────────┬──────┐
│ Module │ id │ version │ pid │ status │ restart │ cpu │ memory │ user │
├───────────────┼────┼─────────┼─────┼────────┼─────────┼─────┼───────────┼──────┤
│ pm2-logrotate │ 2 │ 2.7.0 │ 205 │ online │ 0 │ 0% │ 44.5 MB │ root │
└───────────────┴────┴─────────┴─────┴────────┴─────────┴─────┴───────────┴──────┘
One reason is that pm2-logrotate is not the primary process of the Docker container but a process managed by pm2. You can verify this behaviour by stopping the main process defined in pm2.${NODE_ENV}.config.js: your container will die regardless of whether pm2-logrotate is running.
Also, I do not think the PID should be null; it should look something like this:
pm2 ls
┌─────┬──────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 1 │ www │ default │ 0.0.0 │ fork │ 26 │ 13s │ 0 │ online │ 0% │ 40.3mb │ root │ disabled │
└─────┴──────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0 │ pm2-logrotate │ 2.7.0 │ 17 │ online │ 0 │ 0.5% │ 43.1mb │ root │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
I would also suggest using an Alpine base image, as the image above is very heavy: the image below is around 150 MB, while the one above is around 1 GB.
FROM node:alpine
RUN npm install pm2 -g
RUN pm2 install pm2-logrotate
RUN pm2 set pm2-logrotate:retain 90
RUN pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
RUN pm2 set pm2-logrotate:max_size 10M
RUN pm2 set pm2-logrotate:rotateInterval 0 0 * * *
RUN pm2 set pm2-logrotate:rotateModule true
RUN pm2 set pm2-logrotate:workerInterval 10
ENV NODE_ENV=$buildenv
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /app
COPY . /app
# install the app's dependencies after the source (and package.json) are copied in
RUN npm install
CMD ["sh", "-c", "pm2-runtime start pm2.${NODE_ENV}.config.js"]

Unable to connect second org peer to channel in HLF

I am following the link below to set up my first network on Hyperledger Fabric: http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
I am pretty much able to do all the steps mentioned in this setup, and all my Docker containers are working fine. The issue is that when I try to join the second org's peer to the channel, using the command below:
"Join peer0.dfarmretail.com to the channel."
docker exec -e "CORE_PEER_LOCALMSPID=DfarmretailMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@dfarmretail.com/msp" peer0.dfarmretail.com peer channel join -o orderer.dfarmadmin.com:7050 -b dfarmchannel.block
However, I am getting the error below:
error: error getting endorser client for channel: endorser client failed to connect to peer0.dfarmretail.com:8051: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 172.20.0.6:8051: connect: connection refused"
Please see the files below.
My docker-compose.yaml:
version: '2'

networks:
  dfarm:

services:
  ca.dfarmadmin.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.dfarmadmin.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.dfarmadmin.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/ad62c9f5133ad87c5f94d6b3175eb059395b5f68caf43e439e6bb7d42d8296e4_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/dfarmadmin.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.dfarmadmin.com
    networks:
      - dfarm

  orderer.dfarmadmin.com:
    container_name: orderer.dfarmadmin.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./config/:/etc/hyperledger/configtx
      - ./crypto-config/ordererOrganizations/dfarmadmin.com/orderers/orderer.dfarmadmin.com/:/etc/hyperledger/msp/orderer
      - ./crypto-config/peerOrganizations/dfarmadmin.com/peers/peer0.dfarmadmin.com/:/etc/hyperledger/msp/peerDfarmadmin
    networks:
      - dfarm

  peer0.dfarmadmin.com:
    container_name: peer0.dfarmadmin.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.dfarmadmin.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.dfarmadmin.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dfarmadmin.com:7051
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/dfarmadmin.com/peers/peer0.dfarmadmin.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/dfarmadmin.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.dfarmadmin.com
      - couchdb
    networks:
      - dfarm

  peer0.dfarmretail.com:
    container_name: peer0.dfarmretail.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.dfarmretail.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=DfarmretailMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.dfarmretail.com:8051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dfarmretail.com:8051
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 8051:8051
      - 8053:8053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/dfarmretail.com/peers/peer0.dfarmretail.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/dfarmretail.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.dfarmadmin.com
      - couchdb
    networks:
      - dfarm

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - dfarm

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.dfarmadmin.com:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dfarmadmin.com/users/Admin@dfarmadmin.com/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    networks:
      - dfarm
    depends_on:
      - orderer.dfarmadmin.com
      - peer0.dfarmadmin.com
      - peer0.dfarmretail.com
      - couchdb
My start.sh:
#!/bin/bash
#
# Exit on first error, print all commands.
set -ev
# don't rewrite paths for Windows Git Bash users
export MSYS_NO_PATHCONV=1
FABRIC_START_TIMEOUT=90
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose.yml up -d ca.dfarmadmin.com orderer.dfarmadmin.com peer0.dfarmadmin.com peer0.dfarmretail.com couchdb
# wait for Hyperledger Fabric to start
# incase of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
export FABRIC_START_TIMEOUT=10
#echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=DfarmadminMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@dfarmadmin.com/msp" peer0.dfarmadmin.com peer channel create -o orderer.dfarmadmin.com:7050 -c dfarmchannel -f /etc/hyperledger/configtx/channel.tx
# Join peer0.dfarmadmin.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=DfarmadminMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@dfarmadmin.com/msp" peer0.dfarmadmin.com peer channel join -b dfarmchannel.block
# Join peer0.dfarmretail.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=DfarmretailMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@dfarmretail.com/msp" peer0.dfarmretail.com peer channel join -o orderer.dfarmadmin.com:7050 -b dfarmchannel.block
This is my project folder structure:
├── config
│ ├── channel.tx
│ ├── DfarmadminMSPanchors.tx
│ ├── DfarmretailMSPanchors.tx
│ └── genesis.block
├── configtx.yaml
├── crypto-config
│ ├── 1
│ ├── ordererOrganizations
│ │ └── dfarmadmin.com
│ │ ├── ca
│ │ │ ├── 5f0077f4811e16e3bac8b64ae22e35bd52f3205538587e0a52eaa49e86b57c4c_sk
│ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ ├── orderers
│ │ │ └── orderer.dfarmadmin.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── ecda7305295e86d0890aea73874c80c21a9b29dc04435ef521f1025194a366c8_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer.dfarmadmin.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ ├── tlsca
│ │ │ ├── 199db47c8e231c6cff329e1fdfa8b522ef7b74847808f61045057b56498f49fd_sk
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── users
│ │ └── Admin#dfarmadmin.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 0c5004c87035e89b735940b5b446d59d138c1af8f42b73980c7d7b03373ee333_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── peerOrganizations
│ ├── dfarmadmin.com
│ │ ├── ca
│ │ │ ├── ad62c9f5133ad87c5f94d6b3175eb059395b5f68caf43e439e6bb7d42d8296e4_sk
│ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ ├── peers
│ │ │ ├── peer0.dfarmadmin.com
│ │ │ │ ├── msp
│ │ │ │ │ ├── admincerts
│ │ │ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ │ │ ├── cacerts
│ │ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ │ ├── keystore
│ │ │ │ │ │ └── 66f1271392ea3ce4d3548e91ee5620591e79e538e36a69b38786b3f11f3c53e2_sk
│ │ │ │ │ ├── signcerts
│ │ │ │ │ │ └── peer0.dfarmadmin.com-cert.pem
│ │ │ │ │ └── tlscacerts
│ │ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ │ └── tls
│ │ │ │ ├── ca.crt
│ │ │ │ ├── server.crt
│ │ │ │ └── server.key
│ │ │ └── peer0.dfarmretail.com
│ │ │ └── msp
│ │ │ └── keystore
│ │ ├── tlsca
│ │ │ ├── f6f49b0ff9c7f850e5f655dfbb88ce7b8c07f3f872d151346ac65c6f5f2ef80d_sk
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── users
│ │ ├── Admin#dfarmadmin.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── 9c65737a78159bf977b9e38299c9c8e02278f76c3d4650caf32a4da845947547_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── Admin#dfarmadmin.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── client.crt
│ │ │ └── client.key
│ │ └── User1#dfarmadmin.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── User1#dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 458f1f699493828d88507fabb9ad2dab4fc2cc8acdaf4aa65c1fda12710227dd_sk
│ │ │ ├── signcerts
│ │ │ │ └── User1#dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── dfarmretail.com
│ ├── ca
│ │ ├── 8f839598652d94f6ab6cb3d0f15390df5fe8dd7b6bb88c5c3b75205b975bc8d2_sk
│ │ └── ca.dfarmretail.com-cert.pem
│ ├── msp
│ │ ├── admincerts
│ │ │ └── Admin#dfarmretail.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.dfarmretail.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.dfarmretail.com-cert.pem
│ ├── peers
│ │ └── peer0.dfarmretail.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin#dfarmretail.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmretail.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 2115fb2c52372041918517c2dcef91cb7cc66ca4a987a1606a98e9b75d78ab91_sk
│ │ │ ├── signcerts
│ │ │ │ └── peer0.dfarmretail.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmretail.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── server.crt
│ │ └── server.key
│ ├── tlsca
│ │ ├── 8b26e70a303598e0012852426ac93be726210c5911baf4695785cf595bad3041_sk
│ │ └── tlsca.dfarmretail.com-cert.pem
│ └── users
│ ├── Admin#dfarmretail.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin#dfarmretail.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmretail.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 7ac01c0d8b0b4f3245d1e68fe34d34a2e1727059c459c1418b68b66870328eb2_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin#dfarmretail.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmretail.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── User1#dfarmretail.com
│ ├── msp
│ │ ├── admincerts
│ │ │ └── User1#dfarmretail.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.dfarmretail.com-cert.pem
│ │ ├── keystore
│ │ │ └── e40665832cc9d4fce41f72b04505655f9eb46e3b704547987f03863de37331b5_sk
│ │ ├── signcerts
│ │ │ └── User1#dfarmretail.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.dfarmretail.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── client.crt
│ └── client.key
├── crypto-config.yaml
├── docker-compose.yml
├── generate.sh
├── init.sh
├── README.md
├── start.sh
├── stop.sh
└── teardown.sh
docker logs for dfarmretail container
docker logs orderer.dfarmadmin.com
I have tried a lot to rectify the issue, but I have been unable to, so could you please help with this?
Thanks in advance.
Is your peer0.dfarmretail.com peer running OK? (I would check its log.) In your docker-compose file you are configuring both of your peers to use the same CouchDB container, but you need to configure a separate CouchDB for each peer. The retail peer may be failing because of some problem with the CouchDB container already being allocated to the admin peer. The second CouchDB container will have to use a different port, and the retail peer will have to be changed to connect to that new port.
I notice that you are exposing port 7053 on your peers. Port 7053 was used in earlier versions of Fabric for the event hub, I think; what version of Fabric are you using?
You don't have to use CouchDB for your peers, but if you configure your peers to use CouchDB (CORE_LEDGER_STATE_STATEDATABASE=CouchDB) then you need a separate CouchDB container for each.
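A sketch of the separate-CouchDB setup (the container name and host port here are my choice, not from the question):

```yaml
  couchdb1:
    container_name: couchdb1
    image: hyperledger/fabric-couchdb
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 6984:5984   # different host port; inside the compose network it is still 5984
    networks:
      - dfarm

  # and in peer0.dfarmretail.com's environment, point at the new container:
  #   - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
```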
Following updates to the question and comments:
The original error shows "connection refused", but from the log it looks like the peer is still running, so this looks like a networking error. There is also a line in the dfarmretail peer log showing that the chaincode listen address is using port 7052, whereas I think it should be 8052.
I suggest you add these two config lines to the dfarmadmin peer in the docker-compose file:
- CORE_PEER_LISTENADDRESS=0.0.0.0:7051
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
and for dfarmretail peer:
- CORE_PEER_LISTENADDRESS=0.0.0.0:8051
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
This should clear up any port ambiguity and make the peers listen on all interfaces.
You could try the free tool from www.chaincoder.org, which will generate all the config files for you and let you easily code and deploy chaincodes on peers. Follow here: Chaincoder.
