Is it possible to see the values returned by a Hyperledger Composer transaction processor in the browser?

Is there any way to see the values returned by a Hyperledger Composer transaction processor from the Hyperledger Composer Playground in the browser?
I tried enabling logs in the Chrome browser but was unable to see the values returned from the transaction processor.
I am debugging a Composer business network in the online Playground, so I am looking to understand whether I can see the values returned from a transaction processor function. I can see statements printed via console.log() in the browser, but I cannot get hold of the values returned by the processor.
/**
 * Transaction created to add the new property to the system
 * @param {org.example.property.example} tx
 * @returns {string}
 * @transaction
 */
async function example(tx) {
    return 'hello world!';
}

Runtime (local) log:
If you run docker ps -a you should see the running Docker containers. One of them will be a chaincode container (your running business network); note its container ID.
Then run docker logs <container id> and you will see your console.log() output and the transaction return information in your terminal.
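For example, a minimal sequence would look like this (the dev- name prefix is the usual convention for Fabric chaincode containers, but may differ in your setup):
docker ps -a                    # find the chaincode / business network container (often named dev-...)
docker logs -f <container id>   # stream its output, including console.log() and the transaction return information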
Developer log:
You can see the output in the browser's developer console. In Firefox and Chrome, for example, press CTRL-SHIFT-I to open it, then go to the Console tab to see your console.log() information.

Related

Understanding Docker Container Internals in Hyperledger Fabric

I think I understand how Fabric broadly works and how consensus is reached. What I am still missing in the documentation is what happens inside a Fabric Docker container so that it can take part in the communication process.
So communication starting from a client (e.g. an app) takes place using gRPC messages between peers and the orderer.
But what happens inside the containers?
I imagine it as a process that just receives gRPC messages and answers them using functions running in the background of a peer/orderer, handing its response out for further processing in another unit, such as the client collecting the responses of multiple peers for a smart contract.
But what really happens inside a container? A container spawns when the Docker image is loaded and launched via the YAML config file, but what is started inside it? Is only a single peer binary started (e.g. via the command "peer node start"), i.e. just the compiled Go binary "peer"? What is listening and what is responding there? I discovered only one exposed port per container, which seems to be the gateway for gRPC (since it is often a port ending in **51).
The same questions go for the orderer, the chaincode, and the CLI. How do they talk to each other, or is gRPC the only means of communication and processing (aside from the discovery service and gossip)? And how is all of this started inside the containers: just via the YAML files used for launching, or is there further internal configuration or a startup script in the image files? (I cannot look inside the images, only log in to running containers at runtime.)
When your client sends a request to one of the peers, the peer instance checks whether the requested chaincode (CC) is installed on it.
If the CC is not installed: obviously you will get an error.
If the CC is installed: the peer checks whether a dedicated container has already been started for the given CC and the corresponding version. If the container is started, the peer sends the transaction request to that CC instance and returns the response to your client after signing the transaction. Signing guarantees that the response was really sent by that peer.
If the container is not started: the peer builds a Docker image and starts an instance of it (a Docker container). The new image is based on one of the Hyperledger base images; e.g. if your CC is written in Go, then hyperledger/baseos, which is a very basic Linux OS, will be used. This new image contains the CC binary and metadata as well.
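For example, after the first invocation you would typically see something like the following on the peer's host (the dev- prefix is the usual naming convention, shown here for illustration):
docker images | grep dev-   # the image that was built for your chaincode
docker ps | grep dev-       # the running chaincode container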
The peer instance uses the underlying (your) machine's Docker daemon to do all of this. That is why we need to pass /var/run:/host/var/run as a volume mapping and CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock as an environment variable.
Once the CC container starts, it connects to its parent peer node, which is defined by the CORE_PEER_CHAINCODEADDRESS attribute. The peer dictates to the child (probably during image creation) to use this address, so it obeys. The peer node defines its own listen URL with the CORE_PEER_CHAINCODELISTENADDRESS attribute.
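As a rough sketch (not a complete configuration; the hostname and port 7052 are illustrative assumptions), the relevant pieces of a peer container's startup could look like this:
docker run -d --name peer0.org1.example.com \
  -e CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock \
  -e CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 \
  -e CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052 \
  -v /var/run:/host/var/run \
  hyperledger/fabric-peer peer node start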
Regarding your last question: communication between nodes, and also with clients, uses gRPC. If TLS is enabled, the communication is secure. The entry point for orderers to know about peers, and for peers to know about other organizations' peers, is the set of anchor peers defined during channel creation. The discovery service runs in the peer nodes, so they can hold a close-to-real-time network layout. The discovery service also provides the peers' identities; that is how clients can detect other organizations' peers when the endorsement policy requires endorsements from multiple organizations (i.e. if the policy looks like AND(Org1MSP.member, Org2MSP.member)).

kafka connect in distributed mode is not generating logs specified via log4j properties

I have been using Kafka Connect in my work setup for a while now and it works perfectly fine.
Recently I thought of dabbling with a few connectors of my own in my Docker-based Kafka cluster with just one broker (ubuntu:18.04 with Kafka installed) and a separate node acting as a client for deploying connector apps.
Here is the problem:
Once my broker is up and running, I log in to the client node (with no broker running, just the vanilla Kafka installation) and set up the classpath to point to my connector libraries. I also set the KAFKA_LOG4J_OPTS environment variable to point to the location of the log file to generate, with debug mode enabled.
So every time I start the Kafka Connect worker using the command:
nohup /opt//bin/connect-distributed /opt//config/connect-distributed.properties > /dev/null 2>&1 &
the connector starts running, but I don't see the log file getting generated.
I have tried several changes but nothing has worked.
QUESTIONS:
Does this mean that connect-distributed.sh doesn't generate the log file after reading the KAFKA_LOG4J_OPTS variable? And if it does, could someone explain how?
NOTE:
(I have already debugged the connect-distributed.sh script and tried it both with and without daemon mode. By default, if KAFKA_LOG4J_OPTS is not provided, it uses the connect-log4j.properties file in the config directory, but even then no log file is generated.)
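For clarity, the kind of setup I mean looks roughly like this (paths are placeholders; the referenced log4j properties file would need a file appender defined for a log file to appear):
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/config/connect-log4j.properties"
nohup /path/to/bin/connect-distributed.sh /path/to/config/connect-distributed.properties > /dev/null 2>&1 &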
OBSERVATION:
Only when I start ZooKeeper/the broker on the client node is the provided KAFKA_LOG4J_OPTS value picked up and logs start getting generated, but nothing related to the Kafka connector. I have already verified the connectivity between the client and the broker using kafkacat.
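For example, a simple metadata listing along these lines works from the client node (the broker address is a placeholder):
kafkacat -b <broker-host>:9092 -L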
The interesting part is:
I follow the same process at my workplace, and logs start getting generated every time the worker (connect-distributed.sh) is started, but I haven't been able to replicate that behavior in my own setup. And I have no clue what I am missing here.
Could someone provide some reasoning? This is really driving me mad.

Why isn't Stackdriver capturing log levels from Slf4j logs from Compute Engine Instance running a Docker Image?

I currently have a Gradle Spring Boot app running as a Docker image in a GCP Compute Engine instance. In my Application class I added the Lombok @Slf4j annotation, and in the main method I added the line log.info("Hello world");. I ran the image on my GCE instance via docker run -d --rm -it -p 8888:8080 {image} and checked the Stackdriver logs.
I would expect to be able to filter via log level (INFO, WARNING, etc.), but it seems that the logs are not mapping the log level appropriately, meaning they only show up when the "log level: Any" filter is chosen.
The above log.info() statement shows up in Stackdriver as so:
[2m2019-10-01 17:55:41.159[0;39m [32m INFO[0;39m [35m1[0;39m [2m---[0;39m [2m[nio-8080-exec-5][0;39m [36mc.g.o.Application [0;39m [2m:[0;39m Hello world
with the Json payload:
jsonPayload: {
  container: {}
  instance: {}
  message: "[2m2019-10-01 17:55:41.159[0;39m [32m INFO[0;39m [35m1[0;39m [2m---[0;39m [2m[nio-8080-exec-5][0;39m [36mc.g.o.Application [0;39m [2m:[0;39m Hello world"
}
and "logname" is projects/my-project/logs/gcplogs-docker-driver.
Why isn't Stackdriver capturing the log levels from Slf4j even though gcplogs-docker-driver is being used?
It looks like Docker's gcplogs-docker-driver causes the output to be sent to GCP's Stackdriver Logging (aka Cloud Logging). The gcplogs driver just sends each input line as-is with no further processing. There doesn't seem to be any appetite in docker/moby to do additional processing such as attempting to extract severities.
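For reference, the gcplogs driver is typically enabled either in the Docker daemon configuration or per container with something like the following (the project ID is a placeholder):
docker run -d --rm -it -p 8888:8080 --log-driver=gcplogs --log-opt gcp-project=my-project {image}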
You might be able to do some after-the-fact labelling, but I've never tried.
Note that some platforms perform additional processing before submitting the entries to Stackdriver Logging. For example, GKE logs console output using the Stackdriver Logging agent, which supports structured logs (JSON-encoded payloads). Alternatively, you might be able to configure your application's logging framework to log directly to Stackdriver Logging.
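For example, with an agent that understands structured logs, a single JSON line written to stdout along these lines (field names follow Cloud Logging's structured-log conventions) is mapped to a proper severity instead of a plain text message:
{"severity":"INFO","message":"Hello world"}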

How do I start Eclipse-Hono client in MQTT?

I'm having trouble in starting MQTT Client in Eclipse Hono.
I'm using The following command to start the client
java -jar hono-example-0.6-exec.jar --hono.client.host=hono.eclipse.org --hono.client.port=15672 --hono.client.username=consumer@HONO --hono.client.password=verysecret --spring.profiles.active=receiver --tenant.id=bob
which starts a client that accepts telemetry data produced by devices, but it doesn't catch the data published through MQTT.
What may be wrong with this approach?
The command you are using does not start an MQTT client; it starts the receiver for consuming (AMQP 1.0) messages from devices belonging to tenant bob. In order to see something happening, you need to have a device that belongs to tenant bob publish some data. If you want to use MQTT for that purpose, you may want to use the mosquitto_pub command line client as described in the Getting Started guide. However, make sure that you use the correct username and password. From what I can see in the device registry on hono.eclipse.org, you have registered a device with ID 1112 and auth-id sensor1, so the command to publish should look something like:
mosquitto_pub -h hono.eclipse.org -u sensor1@bob -P thepasswordyouregistered -t telemetry -m "hello"
Again, make sure to replace thepasswordyouregistered with the real password that you have registered for device 1112.

How to add health check for python code in docker container

I have just started exploring the health check feature in Docker. All the tutorials online show the same type of health check example, like link1 and link2. They use this same command:
HEALTHCHECK CMD curl --fail http://localhost:3000/ || exit 1
I have Python code which I have converted into a Docker image, and its container is running fine. There is a service in the container which runs fine, but I want to put a health check on this service. It is started/stopped using:
service <myservice> start
service <myservice> stop
This service is responsible for sending data to a server. I need to put a health check on it but don't know how to do it. I have searched for this and didn't find any examples. Can anyone please point me to the right link or explain it?
Thanks
The health check command is not something magical, but rather something you can automate to get a better status on your service.
Some questions you should ask yourself before setting the healthcheck:
How would I normally verify that the service is running OK, assuming I'm running it normally rather than inside a container and it's not an automated process, but rather I check the status by doing something myself?
If the service has no open ports it can be interrogated on, does it instead write its success/failure status to a file on disk?
If the service has open ports but communicates over a custom protocol, do I have any tools I can use to interrogate the open ports?
Let's take the curl command you listed: it implies that the health check is monitoring an HTTP service listening on port 3000. The curl --fail command will exit with an error if the HTTP response indicates failure (status code 400 or above). That is a straightforward way to demonstrate health check usage.
Assuming your service writes success or failure to a file every 30 seconds, your health check would be a script that exits abnormally when it encounters the failure text.
Assuming your service has an open port but communicates via some custom protocol such as Protocol Buffers, all you have to do is call it with a script that encodes a payload with protobuf and then checks the output received.
And so on...
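As a rough sketch for a service like yours (assuming either that the init script reports a meaningful status or that the service writes a status file; names, paths, and intervals are placeholders), a Dockerfile health check could look like one of these:
HEALTHCHECK --interval=30s --timeout=5s CMD service <myservice> status || exit 1
HEALTHCHECK --interval=30s --timeout=5s CMD grep -q OK /var/run/<myservice>.status || exit 1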

Resources