Hyperledger Composer setting up connection.json - docker

Hi, and thanks to everyone reading this.
Since I want to use Hyperledger Composer, I deployed the orderer, peers, CAs, and the other components, and everything succeeded up to creating and joining the channel.
(I believe this is true because I finished creating the channels, joining the peers to the channel, and installing and instantiating the chaincode.)
After that I ran the "composer network install" command and got an error saying there was no response from the peers:
"Response from attempted peer comms was an error: Error: 14 UNAVAILABLE: EOF"
So I started to suspect a problem in the file named "connection.json", but I don't know specifically how to edit that file.
Running "docker service ls" and "docker network inspect fabric" gives the output below:
(screenshot)
and my connection.json file looks like this:
(screenshot)
I followed this guide to run Hyperledger Fabric on multiple hosts:
https://medium.com/@malliksarvepalli/hyperledger-fabric-on-multiple-hosts-using-docker-swarm-and-compose-f4b70c64fa7d
And this is a screenshot taken after installing the business network:
(screenshot)
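In general, a gRPC "14 UNAVAILABLE: EOF" from composer network install means the client could not hold a connection open to an address listed in connection.json, commonly because a URL points at the wrong host or port, or uses grpc:// against a TLS-enabled endpoint. For orientation only, here is a heavily trimmed, hypothetical connection.json for a two-org network like the one above; every hostname, port, channel name, and MSP id is a placeholder guess to be replaced with the service names shown by "docker service ls":

```shell
# Write a hypothetical, trimmed connection.json sketch; all names here
# (mychannel, Org1MSP, orderer, org1peer0, ...) are placeholders.
cat > connection.json.sketch <<'EOF'
{
  "name": "fabric-network",
  "x-type": "hlfv1",
  "version": "1.0.0",
  "channels": {
    "mychannel": {
      "orderers": ["orderer"],
      "peers": { "org1peer0": {}, "org2peer0": {} }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": ["org1peer0"],
      "certificateAuthorities": ["ca1"]
    }
  },
  "orderers": {
    "orderer": { "url": "grpc://orderer:7050" }
  },
  "peers": {
    "org1peer0": { "url": "grpc://org1peer0:7051" },
    "org2peer0": { "url": "grpc://org2peer0:9051" }
  },
  "certificateAuthorities": {
    "ca1": { "url": "http://ca1:7054", "caName": "ca1" }
  }
}
EOF
# sanity-check that the edited file is still valid JSON
python3 -m json.tool connection.json.sketch
```

The key things to check against your running swarm are that each "url" host matches a resolvable service name from "docker service ls" and that the ports match the ones the containers actually publish.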

I think your Fabric network is not running!
Open a terminal, go to your fabric-dev-servers directory, and run ./startFabric.sh.
If you hit an error there, such as "some container already exists", run ./teardownFabric.sh first and then run the start command again.
Once the network is running successfully, you need to create the admin card by running ./createPeerAdminCard.sh.
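Put together, that recovery sequence looks like the following (assuming the default fabric-dev-servers download location in your home directory; adjust the path to wherever you unpacked the scripts):

```shell
# Tear down, restart, and re-create the PeerAdmin card, in order.
cd ~/fabric-dev-servers || exit 1
./teardownFabric.sh        # clear out any half-started containers first
./startFabric.sh           # bring the local Fabric network back up
./createPeerAdminCard.sh   # create the PeerAdmin card for Composer
docker ps                  # verify the peer/orderer/ca containers are up
```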

Could you confirm that the orderer, peers, and CAs all launched successfully on each machine? The 'docker ps' command shows which services are running; with 'docker ps -a' you can also find which services have stopped.
Based on all of the docker-compose files, the following container names should be listed by 'docker ps':
orderer: orderer
org1: ca1, org1peer0, org1peer1, org1cli
org2: ca2, org2peer0, org2peer1, org2cli
Could you check whether this is correct?
Also, are you running this project on 3 physical machines or 3 cloud instances?
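If it helps, here is a small shell sketch that compares the expected list above against the live "docker ps" output on each host; the container names are the ones from the answer, so edit the list if your compose files use different names:

```shell
# Report which of the expected containers are running, given the
# newline-separated output of: docker ps --format '{{.Names}}'
check_containers() {
  expected="orderer ca1 org1peer0 org1peer1 org1cli ca2 org2peer0 org2peer1 org2cli"
  for c in $expected; do
    if printf '%s\n' "$1" | grep -qx "$c"; then
      echo "$c: running"
    else
      echo "$c: MISSING (try 'docker ps -a' and 'docker logs $c')"
    fi
  done
}
# feed it the live container names (empty if docker is unavailable):
check_containers "$(docker ps --format '{{.Names}}' 2>/dev/null)"
```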

Related

Issues running multiple build jobs in parallel using DIND

We have a local GitLab instance using 3 runners, which works fine when a single build job is running.
Sadly, when launching 3 build jobs in parallel using dind, it fails with a multitude of errors:
sometimes it is unable to log in to Docker to pull the image for the cache;
sometimes the login succeeds and it fails during the build;
but in both cases it complains about the certificate:
failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "docker:dind CA")
Suspecting that the certificates were being clobbered by the other build jobs, we separated the certificate folder so that it is unique to each runner; sadly, the issue remains.
We have also noticed that with DOCKER_HOST="tcp://docker:2376" the Docker address is supposed to be random, yet it often returns the same value, which again means the jobs are using the same resources.
I found a guide on how to manually use a script to ensure each job is connected to its own unique dind service (HERE), but since the article is over 5 years old, I wonder whether it is still applicable, or whether I am doing something wrong.
Please share any advice or guidance on where to look.

"docker build is disabled" error when installing my chaincode on Hyperledger Fabric

I am creating a Hyperledger Fabric network using the following Hyperledger Fabric operator for Kubernetes: https://github.com/hyperledger-labs/hlf-operator. My cluster is configured in AWS EKS and is currently running 3 nodes. I am following the documentation, and so far all the implementation steps work without problems, but when installing my chaincode it shows me the following message:
'InstallChaincode': could not build chaincode: docker build failed: docker build is disabled
I have validated and changed the Docker permissions, but I don't understand what I am missing to make it work and install my chaincode.
I think it may be a permissions error in EKS; I am also still validating the permissions.
I encountered the same problem and finally solved it. The problem is that when you create your peer node right now (as of July 28, 2022), the version defaults to 2.3.0-v0.0.2 (you can run kubectl hlf peer create --help and see the description next to the --version flag). This peer version happens to be incompatible with deploying ccaas, chaincode as a service. So the solution is to manually override the version using the --version flag while creating the peer node; peer version 2.4.1-v0.0.4 solved this for me.
Please see the command below for creating a peer node for org1:
kubectl hlf peer create --statedb=couchdb --storage-class=standard --enroll-id=org1-peer --mspid=Org1MSP --enroll-pw=peerpw --capacity=5Gi --name=org1-peer0 --ca-name=org1-ca.fabric --version=2.4.1-v0.0.4 --namespace=fabric
Note that the above steps apply only when you are using the peer image from quay.io/kfsoftware/fabric-peer, which is the default image. If you want to use another image, use the --image flag. Repeat the same steps while creating every peer node. This should solve your problem. Hope this helps!
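Since the fix has to be repeated for every peer, one way to avoid typos is to generate the command for each peer in a loop. This is only a sketch: the org, CA, and namespace names are copied from the example command above, so adjust them for your own cluster, and remove the leading "echo" to actually run the commands instead of printing them:

```shell
# Generate the peer-create command for each peer of org1 and org2,
# all pinned to --version=2.4.1-v0.0.4 (printed, not executed).
cmds=$(
  for org in org1 org2; do
    msp="Org${org#org}MSP"          # org1 -> Org1MSP, org2 -> Org2MSP
    for n in 0 1; do
      echo "kubectl hlf peer create --statedb=couchdb" \
           "--storage-class=standard --enroll-id=${org}-peer" \
           "--mspid=${msp} --enroll-pw=peerpw --capacity=5Gi" \
           "--name=${org}-peer${n} --ca-name=${org}-ca.fabric" \
           "--version=2.4.1-v0.0.4 --namespace=fabric"
    done
  done
)
printf '%s\n' "$cmds"
```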

ERROR: traefik, xdbautomationworker, Container is unhealthy

I am trying to create a Sitecore 10 image using Docker on Windows 10 Enterprise locally, but I am getting unhealthy containers. Please help me out, as I have already tried the various steps suggested in the forums.
Getting below errors:
Creating network "sitecore-xp0_default" with the default driver
Creating sitecore-xp0_solr_1 ... done
Creating sitecore-xp0_mssql_1 ... done
Creating sitecore-xp0_id_1 ... done
Creating sitecore-xp0_solr-init_1 ... done
Creating sitecore-xp0_xconnect_1 ... done
Creating sitecore-xp0_cm_1 ... done
ERROR: for cortexprocessingworker Container "992574e988e3" is unhealthy.
ERROR: for xdbautomationworker Container "992574e988e3" is unhealthy.
ERROR: for xdbsearchworker Container "992574e988e3" is unhealthy.
ERROR: for traefik Container "933b548fc2f9" is unhealthy.
ERROR: Encountered errors while bringing up the project.
I checked the following things:
docker-compose stop in PowerShell.
docker-compose down in PowerShell.
iisreset /stop in PowerShell to make sure the required ports are free.
docker-compose up -d in PowerShell.
Stopped and removed the containers and executed docker-compose.exe up --detach multiple times, but no luck.
Check the .env file and make sure SITECORE_LICENSE has a value.
You may need to run the init.ps1 file.
Based on the logs now provided in the comments above, my suggestion would be to check the collection SQL connection string to the shardsmanager database.
You can inspect the SQL container in Docker for Windows and find the IP address of the SQL server. Connect to it using SSMS and try the credentials you have in the current connection string.
Edit: looking again at the exception, it looks like it can't find the SQL server, yet the CM server appears to have no problem finding the same server. So compare the web/master/core connection strings to the collection one; I'm guessing the SQL server portion will be different.
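To find the SQL container's IP address, one option is docker inspect with a Go-template format string; the container name below is taken from the compose output above, so adjust it if yours differs:

```shell
# Print the IP address of the mssql container on the compose network.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' sitecore-xp0_mssql_1
```

Point SSMS at the printed address and try the credentials from the collection connection string.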

FoundationDB Docker image on macOS: database unavailable

I am trying to run FoundationDB using a Docker image on macOS, as below:
docker run --init --rm --name=fdb-0 foundationdb/foundationdb:6.2.22
Starting FDB server on 172.17.0.2:4500
It seems to start. But when I then connect to fdbcli after logging into the container, I get the following error statuses:
docker exec -it fdb-0 /bin/bash
root@9e8bb6985be5:/var/fdb# fdbcli
Using cluster file `/var/fdb/fdb.cluster'.
The database is unavailable; type `status' for more information.
Welcome to the fdbcli. For help, type `help'.
fdb> status
Using cluster file `/var/fdb/fdb.cluster'.
The coordinator(s) have no record of this database. Either the coordinator
addresses are incorrect, the coordination state on those machines is missing, or
no database has been created.
172.17.0.2:4500 (reachable)
Unable to locate the data distributor worker.
Unable to locate the ratekeeper worker.
I saw this issue: https://forums.foundationdb.org/t/fdbcli-access-external-docker/1069, but I could not successfully run it on the host network either. Any help would be appreciated.
Try running fdbcli with fdbcli --exec "configure new single memory ; status". This will create the new database in single-redundancy, in-memory mode.
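The "coordinator(s) have no record of this database" status in a fresh container usually just means no database has been created yet, which is what "configure new" fixes. From the host, the one-shot form (using the container name from the question) would be:

```shell
# Create a new single-redundancy, in-memory database inside the
# running fdb-0 container, then print the cluster status.
docker exec fdb-0 fdbcli --exec "configure new single memory ; status"
```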

Unable to run rabbitmq using marathon mesos

I am unable to run RabbitMQ using the Marathon/Mesos framework. I have tried it with the rabbitmq images available on Docker Hub as well as a custom-built rabbitmq Docker image. In the Mesos slave log I see the following error:
E0222 12:38:37.225500 15984 slave.cpp:2344] Failed to update resources for container c02b0067-89c1-4fc1-80b0-0f653b909777 of executor rabbitmq.9ebfc76f-ba61-11e4-85c9-56847afe9799 running task rabbitmq.9ebfc76f-ba61-11e4-85c9-56847afe9799 on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/13197/cgroup: Failed to open file '/proc/13197/cgroup': No such file or directory
On googling I could find one hit as follows
https://github.com/mesosphere/marathon/issues/632
Not sure if this is the same issue I am facing. Has anyone tried running RabbitMQ using Marathon/Mesos/Docker?
Looks like the process went away (likely crashed) before the container was set up. You should check stdout and stderr to see what happened, and fix the root issue.
"cmd": "", is the like'y culprit. I'd look at couchbase docker containers for a few clues on how to get it working.
