Failed to connect Hyperledger Explorer to Fabric project - docker

I have a Fabric project up and running with a 7-org/5-channel setup, each org having 2 peers. Now I am trying to connect Hyperledger Explorer to view the blockchain data, but I am running into an issue in the configuration.
Steps I performed:
Pulled the images and added the following containers in a single docker-compose.yaml file for startup: hyperledger/explorer-db:latest, hyperledger/explorer:latest, prom/prometheus:latest, grafana/grafana:latest
Edited the created containers with the required configuration and volume mounts:
volumes:
  - ./config.json:/opt/explorer/app/platform/fabric/config.json
  - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile/
  - ./crypto-config:/tmp/crypto
  - walletstore:/opt/wallet
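For reference, a rough sketch of how the Explorer service can be wired up in the compose file (the service names, credentials, and the 8080 port here are placeholders, and the environment variables follow the standard Explorer compose example, not necessarily this exact deployment):

  explorer-service:
    image: hyperledger/explorer:latest
    environment:
      - DATABASE_HOST=explorerdb-service     # hostname of the explorer-db container (placeholder name)
      - DATABASE_USERNAME=hppoc              # default explorer-db credentials; change as needed
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=debug
      - DISCOVERY_AS_LOCALHOST=false         # peers are addressed by service name, not localhost
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile/
      - ./crypto-config:/tmp/crypto
      - walletstore:/opt/wallet
    ports:
      - 8080:8080
    depends_on:
      - explorerdb-service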
Since it is a multi-org setup, I edited config.json and pointed it to the respective connection profiles as per the organization setup:
{
  "network-configs": {
    "org1-network": {
      "name": "Sample-1",
      "profile": "./connection-profile/org1-network.json"
    },
    ... and so on for the other orgs
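Spelled out for two orgs, a hedged sketch of the full file might look like this (the network keys, display names, and file names are placeholders; one connection profile per org):

{
  "network-configs": {
    "org1-network": {
      "name": "Sample-1",
      "profile": "./connection-profile/org1-network.json"
    },
    "org2-network": {
      "name": "Sample-2",
      "profile": "./connection-profile/org2-network.json"
    }
  },
  "license": "Apache-2.0"
}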
Edited prometheus.yml to add the static configurations:
static_configs:
  - targets: ['localhost:8443', 'localhost:8444', and so on for every peer service]
  - targets: ['orderer0-service:8443', 'orderer1-service:8444', and so on for every orderer service]
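In context, the scrape section of prometheus.yml might look roughly like the sketch below; the job names are placeholders, and the targets have to match the operations listen addresses configured on each peer and orderer (9449 for the peers, as set further down):

global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'fabric-peers'
    static_configs:
      - targets: ['peer0-org1-service:9449', 'peer0-org2-service:9449']   # one entry per peer operations endpoint
  - job_name: 'fabric-orderers'
    static_configs:
      - targets: ['orderer0-service:8443', 'orderer1-service:8444']       # one entry per orderer operations endpoint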
Edited the peer services in my docker-compose.yaml file to add the values below to each peer's config:
CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9449 # RESTful API for Hyperledger Explorer
CORE_METRICS_PROVIDER=prometheus # Prometheus will pull metrics
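In the compose file this ends up looking something like the sketch below (service name and image tag are placeholders); publishing the operations port is only needed if Prometheus scrapes from outside the Docker network:

  peer0-org1-service:
    image: hyperledger/fabric-peer:1.4.6
    environment:
      - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9449   # operations endpoint scraped by Prometheus
      - CORE_METRICS_PROVIDER=prometheus             # expose metrics in Prometheus format
      # ...the rest of the usual CORE_PEER_* settings stay unchanged
    ports:
      - 9449:9449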
Issue (now resolved, see below):
It seems that Explorer isn't able to find my Admin@org1-cert.pem at the given location. But I double-checked everything: that particular path is present and accessible, and all permissions on it are open, to rule out any permission issue.
Path in question (the full path is provided, not the relative path): /home/auro/Desktop/HLF/fabricapp/crypto-config/peerOrganizations/org1/users/Admin@org1/msp/signcerts/Admin@org1-cert.pem
The config files are also set up properly, yet I am unable to find a way to fix this. I would be really glad if someone could tell me what is going on with this path issue, because I have tried everything I could think of and still cannot get it working.
Other details:
Using Hyperledger Explorer v1.1.0 (pulling the latest Docker image)
Using Hyperledger Fabric v1.4.6 (pulling this specific version from Docker Hub)
Update: Okay, I managed to solve this. Apparently the path given in the config file must be the path inside the Docker container, not the path on the local system. I replaced it with the container path where the files are mounted, and it worked.
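In other words, the admin certificate paths in the connection profile have to use the container-side mount point (/tmp/crypto in the volume list above), not the host path. A hedged sketch of the relevant organization entry, with placeholder names and the keystore file name left generic since it is generated per network (exact field names can differ between Explorer releases):

"organizations": {
  "Org1MSP": {
    "mspid": "Org1MSP",
    "peers": ["peer0-org1-service"],
    "adminPrivateKey": {
      "path": "/tmp/crypto/peerOrganizations/org1/users/Admin@org1/msp/keystore/<generated>_sk"
    },
    "signedCert": {
      "path": "/tmp/crypto/peerOrganizations/org1/users/Admin@org1/msp/signcerts/Admin@org1-cert.pem"
    }
  }
}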
New problem 1 (now solved): Now I am getting a new error, shown in the attached screenshot (highlighted in yellow).
I had a look at the peer-0-org-1-service node logs when this happened, and this is the error it logged:
2020-07-20 04:38:15.995 UTC [core.comm] ServerHandshake -> ERRO 028 TLS handshake failed with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=172.18.0.53:33300
Update: Okay, I managed to solve this too. There were two issues. The TLS handshake wasn't happening because TLS wasn't set to true in the config. The second issue, the "STREAM removed" error, happened because the URL in the config wasn't specified as a gRPC URL (grpcs:// since TLS is enabled). Once both changes were made, it was resolved.
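For reference, the two fixes land in the connection profile roughly as sketched below (service names and paths are placeholders; the important parts are tlsEnable set to true and the grpcs:// scheme on the peer URL):

"client": {
  "tlsEnable": true
},
"peers": {
  "peer0-org1-service": {
    "url": "grpcs://peer0-org1-service:7051",
    "tlsCACerts": {
      "path": "/tmp/crypto/peerOrganizations/org1/peers/peer0-org1/tls/ca.crt"
    }
  }
}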
New problem 2 (current issue):
It seems the channel issue is still there. Somehow it still shows "not assigned to this channel", along with a new error: "Error: 14 UNAVAILABLE: failed to connect to all addresses". The same error appears for all the peers across the 7 orgs.
On top of that, the peers are suddenly not able to talk to each other.
Error Received: Could not connect to Endpoint: peer0-org2-service:7051, InternalEndpoint: peer0-org2-service:7051, PKI-ID: , Metadata: : context deadline exceeded
I checked the peers' channel connection details and everything seems to be in order. I am stuck here for now. Let me know if anyone has any ideas.

As you can see from the edits, I got one problem solved only for another to come along. After banging my head against it for a while, I tore down the entire build, rebuilt it with the corrections given above, and it simply started working.

You seem to be using an old Explorer image. I strongly recommend using the latest one, v1.1.1. Note: there are some updates to the settings format in the connection profile (e.g. the login credential of Explorer). Please refer to README-CONFIG for details.
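For example, in the newer profile format the Explorer login credential sits in the client section, roughly like the sketch below (the id/password values are just examples; README-CONFIG is the authoritative reference):

"client": {
  "tlsEnable": true,
  "adminCredential": {
    "id": "exploreradmin",
    "password": "exploreradminpw"
  },
  "enableAuthentication": true,
  "organization": "Org1MSP",
  "connection": {
    "timeout": {
      "peer": { "endorser": "300" }
    }
  }
}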

Related

Error on etcd health check while setting up RKE cluster

I'm trying to set up an RKE cluster. The connection to the nodes goes well, but when it starts to check etcd health it returns:
failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
If you are trying to upgrade RKE and facing this issue, it could be because the kube_config_<file>.yml file is missing from the local directory when you perform rke up.
A similar issue was reported and reproduced in this git link. Please try the workaround there, reproduce it using the steps provided in the link, and let me know if this works.
Refer to this latest SO answer and the doc for more information.

WebSocket connection to 'wss://postfacto.[mydomain].de/cable' failed

I've deployed Postfacto version 4.3.11 using the official Docker image.
Additionally, I did the following (sketched in the compose snippet at the end of this question):
Added Google Auth
Set DISABLE_SSL_REDIRECT to "false" (not sure what this does)
Set USE_POSTGRES_FOR_ACTION_CABLE to "true" (so there is no separate message queue via Redis, as documented in the section "Removing Redis dependency")
Added an nginx-tls-proxy server as a reverse proxy
Everything seems to be working just fine, but when checking the Google Chrome dev tools I can see the error message shown in the attached screenshot (WebSocketConnectionFailed.png).
Could any of you please tell me what is causing this and whether I can solve it?
Just let me know if you need more information :)
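For context, the settings mentioned above would sit in the compose service roughly like this (the image reference, service name, and the omitted database/Google Auth variables are placeholders, not the actual deployment):

  postfacto:
    image: postfacto/postfacto:4.3.11       # placeholder image reference
    environment:
      - DISABLE_SSL_REDIRECT=false          # as set in the question
      - USE_POSTGRES_FOR_ACTION_CABLE=true  # Postgres instead of Redis for Action Cable
      # database credentials and Google Auth settings omitted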

Running Kafka connect in standalone mode, having issues with offsets

I am using this GitHub repo and folder path I found, https://github.com/entechlog/kafka-examples/tree/master/kafka-connect-standalone, to run Kafka Connect locally in standalone mode. I have made some changes to the Docker Compose file, but mainly changes that pertain to authentication.
The problem I am now having is that when I run the Docker image, I get this error multiple times, for each partition (there are 10 of them, 0 through 9):
[2021-12-07 19:03:04,485] INFO [bq-sink-connector|task-0] [Consumer clientId=connector- consumer-bq-sink-connector-0, groupId=connect-bq-sink-connector] Found no committed offset for partition <topic name here>-<partition number here> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1362)
I don't think there are any issues with authenticating or connecting to the endpoint(s); I think the consumer (the Connect sink) is not sending the offset back.
Am I missing an environment variable? You will see this Docker Compose file has CONNECT_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets, and I tried adding CONNECTOR_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets (CONNECT_ vs. CONNECTOR_), but then I get a Failed authentication with <Kafka endpoint here> error, so now I'm just going in circles.
I think you are focused on the wrong output. That is an INFO message, not an error.
The offsets file (or the offsets topic, in distributed mode) is only used for source connectors.
Sink connectors use consumer groups. If no committed offset is found for groupId=connect-bq-sink-connector, then the consumer group simply hasn't committed one yet.
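To make the distinction concrete, in the worker's compose environment the file-based setting only covers source connectors; a hedged sketch, with a comment on where the sink offsets actually live:

    environment:
      # Standalone worker setting: file that stores offsets for *source* connectors only
      CONNECT_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets
      # Sink connector offsets are not written here; they are committed by the
      # consumer group (connect-bq-sink-connector) to Kafka's __consumer_offsets topic.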

While using Harbor, creating a 'New Registry Endpoint' shows an unhealthy issue

While using Harbor to create a 'New Registry Endpoint', I get an unhealthy error, as below.
I followed the HTTPS cert generation instructions at this link: https://goharbor.io/docs/1.10/install-config/configure-https/
Harbor installed successfully. I logged in to create the 'New Registry Endpoint', and it showed the error 'registry https://hub.csp.cn is unhealthy: unhealthy' (screenshot of the issue attached).
I checked the Harbor logs and found this entry:
'Jun 25 09:40:26 172.18.0.1 core[1034]: 2020-06-25T01:40:26Z [ERROR] [/replication/adapter/native/adapter.go:154]: failed to ping registry https://hub.csp.cn: Head https://hub.csp.cn/v2/: Get https://hub.csp.cn/v2/: dial tcp: lookup hub.csp.cn on 127.0.0.11:53: no such host'
The VMware host IP is '192.168.111.100' and the domain mapping is 'hub.csp.cn'.
Following the log message, I checked the source code (screenshot attached), but I'm not familiar with the Go source code.
Does anyone have any idea about the issue?
Thanks.
I know this is old but thought I might be able to help someone else.
I had this happen when using Terraform to add a Docker Hub registry to Harbor. It was actually Docker Hub telling me it didn't like my credentials. I hard-coded them to test, and it worked.
Pretty sure I just needed to add double quotes around my token (which was an environment variable) instead of single quotes... That seems to be what fixed it.

Hyperledger - Blockchain Peers not connecting - Docker container properties

I am creating a sample blockchain network using the tutorial https://hyperledger-fabric.readthedocs.io/en/release-1.2/build_network.html. I am facing an error while connecting the peers:
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded.
I found a probable solution here which I would like to test, but I need help with the following:
How to update the default network of the containers
How to add a property to each container
While accessing my /etc/docker directory I get the error 'Server returned empty listing for directory '/etc/docker'', and it also says permission denied when I try to access it from the terminal. Any help will be appreciated.
There is no need to make any changes to the Docker containers. I faced a similar issue; you can clean up the system space, or, if you are using a VM, install a fresh network in a new VM (assuming you already have all the configuration files copied).
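For completeness, if you did want to put the containers on an explicit network or add a property to a container (the two points asked about), that is done in the compose file rather than in /etc/docker; a minimal sketch with hypothetical names:

networks:
  fabric-net:
    external: true            # reuse an already-created Docker network

services:
  peer0.org1.example.com:
    networks:
      - fabric-net            # attach this container to the named network
    # any other per-container property (extra_hosts, dns, environment, ...)
    # is added as another key under the service entry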
