Hyperledger Fabric v1.0: using query on methods that change the ledger

While going over the v1.0 examples, I ran into some confusion.
As far as I understand, query does not change the ledger, as it is executed locally (i.e. no ordering, committing, or endorsing involved).
But that is from the caller's point of view. Within the chaincode, it just executes whatever the client calls, given that all the CAs and info are valid.
So, for example, if I call
peer chaincode query -C mychannel -n chaincode -c '{"Args":["query", "a"]}'
this would be OK; it just queries a.
But if I call
peer chaincode query -C mychannel -n chaincode -c '{"Args":["invoke", "a"]}'
what would the behavior be, given that invoke involves writing to the ledger?
Also, on the other hand, if I call invoke on the query method
(e.g. peer chaincode invoke ~~~ '{"Args":["query", "a"]}')
what would the behavior be?
As far as I understand, the actual chaincode cannot distinguish whether it is a query or an invoke; it just executes the chaincode method.
Am I far off?

Indeed this is a bit confusing, especially because of something that is not very obvious when you use the CLI tool. Here is the thing: when you use the peer CLI command with invoke, the flow works as follows:
You send a transaction proposal to the endorsing peers
Endorsing peers execute the transaction simulation and sign the results
The client receives the results and sends them to the ordering service
The ordering service cuts the block
The block is delivered to the peers
Peers validate the transactions
Eventually the block is appended to the blockchain
Now here is the difference: when you run the peer CLI with the query command, it does only the following:
You send a transaction proposal to the endorsing peer
Endorsing peers execute the transaction simulation and sign the results
Therefore, since the results are not sent to the ordering service, there is no effect on the final peer state: even if your chaincode has made any changes, they won't be committed.
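To make that concrete, here is a minimal v1.0-style Go chaincode sketch (the type name SimpleChaincode and the key/value layout are illustrative, not taken from the question's chaincode). The point: the chaincode cannot tell query from invoke, and PutState only records a write in the simulated read/write set; the write lands on the ledger only if the endorsed results are later submitted to ordering.

package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

type SimpleChaincode struct{}

func (t *SimpleChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

// Invoke dispatches both "query" and "invoke"; the chaincode itself cannot
// tell whether the client used `peer chaincode query` or `peer chaincode invoke`.
func (t *SimpleChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	switch fn {
	case "query":
		if len(args) < 1 {
			return shim.Error("expected a key")
		}
		// read-only: adds nothing to the write set
		val, err := stub.GetState(args[0])
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(val)
	case "invoke":
		if len(args) < 2 {
			return shim.Error("expected a key and a value")
		}
		// PutState only writes into the simulated read/write set; it is
		// committed only if the signed results reach the ordering service.
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	}
	return shim.Error(fmt.Sprintf("unknown function: %s", fn))
}

func main() {
	if err := shim.Start(new(SimpleChaincode)); err != nil {
		fmt.Printf("error starting chaincode: %s\n", err)
	}
}

So calling peer chaincode query with '{"Args":["invoke", "a", "10"]}' would execute PutState during simulation and then simply drop the endorsed result, while calling peer chaincode invoke with '{"Args":["query", "a"]}' would go through the full ordering flow and commit a transaction with an empty write set, changing no state.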

Related

Understanding Docker Container Internals in Hyperledger Fabric

I think I understand how Fabric mainly works and how consensus is reached. What I am still missing in the documentation is what happens inside a Fabric Docker container so that it can take part in the communication process.
So, communication starting from a client (e.g. an app) takes place using gRPC messages between peers and the orderer.
But what happens inside the containers?
I imagine it as a process that only receives gRPC messages and answers them using functions in the background of a peer/orderer, handing its response out for further processing in another unit, like the client collecting the responses of multiple peers for a smart contract.
But what really happens inside a container? I mean, a container spawns when the Docker image is loaded and launched via the YAML config file. But what is started inside it? Is there only a single peer binary running (e.g. started with the command "peer node start"), i.e. only the compiled Go binary "peer"? What is listening? What is responding? I discovered only one exposed port per container; this seems to me to be the gate for gRPC (since it is often a port like **51).
The same question goes for the orderer, the chaincode, and the CLI. How are they talking to each other, or is gRPC the only means of communication and processing (leaving aside the discovery service and gossip)? And how is all this started inside the containers: only via the YAML files used for launching, or is there further internal configuration or a startup script in the image files? (I cannot look inside the images, only log in to running containers at runtime.)
When your client sends a request to one of the peers, the peer instance checks whether the requested chaincode (CC) is installed on it. If the CC is not installed, you'll obviously get an error.
If the CC is installed, the peer checks whether a dedicated container is already started for the given CC and the corresponding version. If the container is started, the peer sends the transaction request to that CC instance and returns the response to your client after signing the transaction. Signing guarantees that the response was really sent by that peer.
If the container is not started:
The peer builds a Docker image and starts an instance of it (a Docker container). The new image is based on one of the Hyperledger images; e.g. if your CC is Go, then hyperledger/baseos, which is a very basic Linux OS, is used. This new image contains the CC binary and metadata as well.
The peer instance uses the underlying (your) machine's Docker server to do all of this. That's the reason why we need to pass /var/run:/host/var/run in the volume mapping and CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock in the environment variables.
Once the CC container starts, it connects to its parent peer node, which is defined with the CORE_PEER_CHAINCODEADDRESS attribute. The peer dictates to the child (probably during image creation) to use this address, so it obeys. The peer node defines its own listen URL with the CORE_PEER_CHAINCODELISTENADDRESS attribute.
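Put together, the relevant peer settings might look like this in a docker-compose file. This is only a sketch: the service name, image tag, and port 7052 are illustrative assumptions, while the volume mapping and the three environment variables are the ones described above.

services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    environment:
      # let the peer drive the host's Docker daemon to build/start CC containers
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # address the peer listens on for incoming chaincode connections
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      # address the peer tells the chaincode container to dial back to
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
    volumes:
      - /var/run:/host/var/run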
About your last question: communication is over gRPC between nodes, and with clients as well. If TLS is enabled, then it is certainly secure communication. The entry point for orderers to know about peers, and for peers to know about other organizations' peers, is the definition of anchor peers during channel creation. The discovery service runs in the peer nodes, so they can hold a close-to-real-time network layout. The discovery service also provides peers' identities; that's how clients can detect other organizations' peers when the endorsement policy requires endorsement from multiple organizations (i.e. if the policy looks like AND(Org1MSP.member, Org2MSP.member)).

WSO2 MI Infinite loop on invalid request line

I run a very simple Micro Integrator service that has only 1 proxy service and a single sequence. In this sequence the incoming XML message is transferred to the Amazon SQS service.
If I run this in Integration Studio on the built-in instance, I have no problems. However, when I package the file into a CAR and feed it to the Docker instance, it boots up and instantly gets bombarded with requests. That is to say, the following logs take over and the container can no longer be manually stopped:
[2020-04-15 12:45:44,585]  INFO {org.apache.synapse.transport.passthru.SourceHandler} - Writer null when calling informWriterError ^[[?62;c^[[?62;c
[2020-04-15 12:45:46,589] ERROR {org.apache.synapse.transport.passthru.SourceHandler} - HttpException occurred
org.apache.http.ProtocolException: Invalid request line: ÇÃ^ú§ß¡ðO©%åË*29xÙVÀ$À(=À&À*kjÀ
    at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(AbstractMessageParser.java:208)
    at org.apache.synapse.transport.http.conn.LoggingNHttpServerConnection$LoggingNHttpMessageParser.parse(LoggingNHttpServerConnection.java:407)
    at org.apache.synapse.transport.http.conn.LoggingNHttpServerConnection$LoggingNHttpMessageParser.parse(LoggingNHttpServerConnection.java:381)
    at org.apache.http.impl.nio.DefaultNHttpServerConnection.consumeInput(DefaultNHttpServerConnection.java:265)
    at org.apache.synapse.transport.http.conn.LoggingNHttpServerConnection.consumeInput(LoggingNHttpServerConnection.java:114)
    at org.apache.synapse.transport.passthru.ServerIODispatch.onInputReady(ServerIODispatch.java:82)
    at org.apache.synapse.transport.passthru.ServerIODispatch.onInputReady(ServerIODispatch.java:39)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:113)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:159)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:338)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.ParseException: Invalid request line: ÇÃ^þvHÅFmÉ (#ë¸'º¯æ¦V
I made sure there were no outside connections possible, and I also found older threads of someone describing this problem, but their solution (changing something in the keystore) did not work.
Also, I made sure to include the SQS certificate in the container as well.
I have no connections set up to connect to the container, so that is out of the equation as well.
What am I missing here?
I have no idea why, but I have identified the culprit to be none other than Portainer. When I shut down Portainer, the stream of requests stops.
According to Wireshark, the requests are all made towards
GET http://172.17.0.1:9000/api/endpoints/<containerID>/docker/<someId>/logs
It seems that because the WSO2 container I'm trying to run is an ESB that uses endpoints and returns 400 status codes on non-existing endpoints, Portainer will retry until it succeeds. This is just my observation, so I could be wrong.
I have confirmed my findings by uploading my container to AWS, where the problem did not exist.

How can I split Hyperledger Fabric peers across 2 instances?

I'm trying to modify the Getting Started example of Hyperledger Fabric v1.0.
The source code is in the examples/e2e_cli directory.
The original scenario is 4 peers, 1 ordering service, and 1 CLI service.
What I want to achieve is 3 peers, 1 ordering service, and 1 CLI service on one cloud instance (instance A), and 1 peer on another instance (instance B).
Since a blockchain is a distributed ledger, I want to test it on multiple instances.
What I did was:
Start 3 peers, 1 CLI, and 1 ordering service on instance A. I commented out the peer2 section of docker-compose.yaml.
Start 1 peer on instance B. I copied the peer2 section of docker-compose.yaml and executed docker-compose -f only-peer2.yaml up (see the sketch after this list).
Follow the instructions in "Manually execute the transactions" to create and instantiate a channel, and try to let the peers join the channel.
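For reference, the standalone file on instance B might look roughly like this. This is only a sketch, not the sample's actual content: the service name, image, and port mapping are assumptions based on the e2e_cli layout, and extra_hosts is just one way to let peer2 resolve the nodes that run on instance A.

services:
  peer2:
    image: hyperledger/fabric-peer
    environment:
      - CORE_PEER_ID=peer2
      - CORE_PEER_ADDRESS=peer2:7051
      # let this peer use the host's Docker daemon for chaincode containers
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    volumes:
      - /var/run:/host/var/run
    ports:
      - 9051:7051
    extra_hosts:
      # resolve the nodes that live on instance A
      - "orderer:<IP address of instance A>"
      - "peer0:<IP address of instance A>"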
Here's what I tried on the CLI to let the peer on instance B join the channel.
Join the channel for peer2 on instance B:
CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer2/localMspConfig
CORE_PEER_ADDRESS=<IP address of instance B>:9051
CORE_PEER_LOCALMSPID="Org0MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer2/localMspConfig/cacerts/peerOrg1.pem
peer channel join -b mychannel.block
I just changed peer2 to the IP address of instance B.
But I got errors and could not get peer2 on instance B to join the channel.
Here are the error messages.
Instance A (CLI):
Error: Error getting endorser client channel: PEER_CONNECTIONERROR - Error trying to connect to local peer: grpc: timed out when dialing
Instance B (peer2):
peer2 | 2017/04/01 22:56:32 grpc: Server.Serve failed to complete security handshake from "<IP address of instance A>:1177": EOF
peer2 | 2017/04/01 22:56:34 grpc: Server.Serve failed to complete security handshake from "<IP address of instance A>:1178": read tcp 172.19.0.2:7051->52.183.102.216:1178: read: connection reset by peer
Since I'm new to blockchain and Hyperledger in general, I may not understand the basics of the authentication mechanism. But I think a working example with 2 instances would be a great starting point.
Since I can go through the original scenario (which uses only 1 instance) without any error, my server settings should be okay...
Please give me hints.
Thank you!
This question is a bit dated, as it refers to the getting started sample for the 1.0.0-alpha release. However, there are indeed resources for setting up a network on multiple hosts.
This one leverages Ansible, OpenStack and Kubernetes and is in the process of being merged into Hyperledger Cello. This one will deploy nodes to pre-provisioned VMs. There are likely other examples.

Hyperledger chaincode "register" vs "deploy"

I see the referenced post "What happened exactly on chaincode deploy and invoke, query, in Hyperledger" below.
For the "register" step of chaincode: does the chaincode author have to register the chaincode with the validating peer before other nodes can find the chaincode in the network, download the source locally, and build the Docker image? What happens if the same chaincode gets deployed multiple times afterwards; will it overwrite the previous state?
Reference:
What happened exactly on chaincode deploy and invoke , query, in Hyperledger?
During "Deploy", the chaincode is submitted to the ledger in the form of a transaction and distributed to all nodes in the network. Each node creates a new Docker container with this chaincode embedded. After that, the container is started and the Init method is executed.
During "Query", the chaincode reads the current state and sends it back to the user. This transaction is not saved in the blockchain.
During "Invoke", the chaincode can modify the state of the variables in the ledger. Each "Invoke" transaction is added to a "block" in the ledger.
I haven't seen a "register" feature at the chaincode level. I can make an assumption (please correct me if I am wrong) that we are speaking about the core API method "Registrar".
"Registrar" is used by the chaincode's author to log in to the network via a validating or non-validating peer. In order to log in, the author should confirm his identity by providing an EnrolmentID and EnrolmentPassword. If the ID and password are correct, new enrolment and transaction certificates will be generated for this particular author.
From this moment the author can deploy chaincode to the network. The "Deploy" request will be sent to one of the peers. This peer will create a "transaction" with information about the "path to chaincode", "init arguments", and "chaincode source code". Then the peer will calculate the hash for this transaction, which looks like this:
a13c53fe822da398aaca7af59f064ae6f85c1d048fcb2ed77c3cacc137964a424deba679390df8d156e49c5fff7cdfc9fecec373a3cddd17e46ca9404096a52d
This hash will be used later as the chaincode name.
The VP keeps open connections to all other VPs in the network and can broadcast the transaction to all of them (see the consensus description for more details).
Every peer will use the information from the transaction to create the local Docker image required for the deployed chaincode, start a new Docker container, and execute the Init method.
If you try to deploy the same chaincode again, Fabric will detect that a chaincode with that name is already deployed and skip initialisation.
If you change anything in the deploy request (path, arguments, any symbol in the chaincode), the peer will generate another hash and deploy a new chaincode (the previous version will not be affected).
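As an illustration of that naming behavior (the real hash construction in Fabric v0.6 is not spelled out here; sha512 and the field order below are purely assumptions used to show the idea of content-derived names):

package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// chaincodeName derives a deterministic name from the deploy request,
// mirroring the idea above: same inputs, same name; any change, new name.
func chaincodeName(path string, initArgs []string, source []byte) string {
	h := sha512.New()
	h.Write([]byte(path))
	for _, a := range initArgs {
		h.Write([]byte(a))
	}
	h.Write(source)
	return hex.EncodeToString(h.Sum(nil)) // 128 hex characters, like the hash shown above
}

func main() {
	name := chaincodeName("github.com/example/cc", []string{"init", "a", "100"}, []byte("...source..."))
	fmt.Println(name)
	// Redeploying with identical inputs reproduces this name, so the duplicate
	// can be detected; changing any byte produces a different name, i.e. a new
	// chaincode rather than an overwrite of the previous state.
}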
Technically, REGISTER is part of the Deploy transaction.
Chaincode provides Deploy, Invoke, and Query methods/APIs to interact with it. When you talk about deploying a chaincode, you are actually talking about an end user (or an application) calling the Deploy method on a chaincode.
The peer processing the Deploy transaction starts the chaincode in a Docker container, but the Deploy transaction processing does not end there. There is a shim layer in the chaincode container which actually communicates with the peer from then on. This communication between the shim layer and the peer is governed by ChaincodeMessage. There are different types of ChaincodeMessage(s), as defined by this declaration:
enum Type {
    UNDEFINED = 0;
    REGISTER = 1;
    REGISTERED = 2;
    INIT = 3;
    READY = 4;
    TRANSACTION = 5;
    COMPLETED = 6;
    ERROR = 7;
    GET_STATE = 8;
    PUT_STATE = 9;
    DEL_STATE = 10;
    INVOKE_CHAINCODE = 11;
    INVOKE_QUERY = 12;
    RESPONSE = 13;
    QUERY = 14;
    QUERY_COMPLETED = 15;
    QUERY_ERROR = 16;
    RANGE_QUERY_STATE = 17;
}
Now quoting from the source http://hyperledger-fabric.readthedocs.io/en/latest/protocol-spec/#33-chaincode:
Upon deploy (chaincode container is started), the shim layer sends a one time REGISTER message to the validating peer with the payload containing the ChaincodeID. The validating peer responds with REGISTERED or ERROR on success or failure respectively. The shim closes the connection and exits if it receives an ERROR.
Hence you get
10:08:38.450 [shim] DEBU : Registering.. sending REGISTER
10:08:39.901 [shim] DEBU : []Received message REGISTERED from shim
10:08:39.965 [shim] DEBU : []Handling ChaincodeMessage of type: REGISTERED(state:created)
10:08:40.037 [shim] DEBU : Received REGISTERED, ready for invocations
after which your chaincode is actually ready to receive Query and Invoke.

MPI error due to Timeout in making connection to a remote process

I'm trying to run a NAS-UPC benchmark to study its profile. UPC uses MPI to communicate with remote processes.
When I run the benchmark with 64 processes, I get the following error:
upcrun -n 64 bt.C.64
"Timeout in making connection to remote process on <<machine name>>"
Can anybody tell me why this error occurs?
This probably means that you're failing to spawn the remote processes. upcrun delegates that to a per-conduit mechanism, which may involve your scheduler (if any). My guess is that you're depending on ssh-type remote access, and that it's failing, probably because you don't have keys, an agent, or host-based trust set up. Can you ssh to your remote nodes without a password? Is the environment on the remote nodes sane (paths, etc.)?
"upcrun -v" may illuminate the problem, even without resorting to the man page ;)
