Hyperledger Composer Playground local files - docker

I'm a beginner with Hyperledger and I have some questions about Hyperledger Composer:
What's the difference between Hyperledger Composer and Composer Playground?
From what I understand, Playground is just a user interface for the configuration, deployment and testing of a business network. So, is there any difference between deploying a business network with Playground and deploying one with Hyperledger Composer using Yeoman? (as shown, for example, in this tutorial)
I installed Composer Playground locally with this official tutorial. After creating a new business network, where can I find the related files on my machine?
What are all the operations I need to run every time I start up my machine to continue developing?
Sometimes just running ./startFabric.sh makes Playground return “Error trying to ping. Make sure chaincode has successfully instantiated and try again”
Do I have to export my business network card from Playground every time I want to test the RESTful API’s (using composer-rest-server)?

1: Hyperledger Composer is a project which helps us interact with Hyperledger Fabric. It includes a UI (composer-playground), a CLI and an NPM (SDK) package. Composer Playground is a testing and development tool: you can create a blockchain smart contract there and deploy it in local memory to test the code. For production deployment, I would suggest using composer-cli.
2: Composer Playground keeps all the cards in the ".composer" folder. This folder usually sits in your home directory; on Ubuntu it is at "/home/user/.composer". Regarding the BNA, if Playground is connected to your Fabric it picks the BNA up from there; in browser-only mode it is kept in the browser's storage.
3: I would suggest running ./stopFabric.sh first (it stops all the Docker containers) and then ./startFabric.sh. If you have installed your own BNA, then just ping the network. Follow this link; at the end you will find the ping command.
4: Once you have imported a card into your composer-rest-server, I think it should be okay; you do not need to import it again while the service is running. You can make composer-rest-server stateful by adding MongoDB to it (follow this). Note that you cannot import a card into two applications, i.e. if you have imported a card into Composer Playground you cannot import the same card into composer-rest-server.
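For illustration, a typical restart sequence might look like the following; the card name admin@my-network is just a placeholder for whatever card you actually use:

./stopFabric.sh
./startFabric.sh
# check that the deployed business network is reachable
composer network ping --card admin@my-network

If the ping still fails after that, the business network's chaincode may simply not have been instantiated yet on the freshly started Fabric.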
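As a rough sketch, exporting a card once and starting the REST server against it could look like this (the card name admin@my-network is hypothetical):

# export the card from the local card store to a file (one-off)
composer card export --card admin@my-network --file admin.card
# import it on the machine that runs the REST server, if different
composer card import --file admin.card
# start the REST server against that card
composer-rest-server --card admin@my-network --namespaces never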

Related

Live reload and two-way communication for Expo in a docker container under new local CLI

I'm using the "new" (SDK 46) project-scoped Expo CLI in a docker container. Basic flow is:
Dockerfile from node:latest runs the Expo npx project creation script, then copies in some app-specific files
CMD is npx expo start
Using docker-compose to create an instance of the above image with port 19000 mapped to local (on a Mac), and EXPO_PACKAGER_PROXY_URL set to my host's local IP (see below). I've also mounted a network volume containing my components into the container to enable live edits on those source files. (A rough single-container equivalent of this setup is sketched below.)
If you google around, you'll see a few dozen examples of how to run Expo in a Docker container (a practice I really believe should be more industry-standard to get better dev-time consistency). These all make reference to various environment variables used to map URLs correctly to the web-based console, etc. However, as of the release of the new (non-global) CLI, these examples are all out of date.
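For reference, a rough single-container equivalent of the setup described above might look like this; the image name, host IP and volume path are placeholders for my actual values:

docker build -t my-expo-app .
docker run --rm --publish 19000:19000 \
  --env EXPO_PACKAGER_PROXY_URL=http://192.168.1.50:19000 \
  --volume "$(pwd)/components:/app/components" \
  my-expo-app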
Using the Expo Go app I've been able to successfully connect to Metro running on the container, after setting EXPO_PACKAGER_PROXY_URL such that the QR code showing up in the terminal directs the Go app to my host on 19000, and then through to the container.
What is not working is live reloading, or even reloading the app at all. To get a change reflected in the app I need to completely restart my container. For whatever reason, Metro does not push an update to the Go app when files are changed (although, weirdly, I do get a little note in Go saying "Refreshing..." which shows it knows a file has changed). Furthermore, it seems like a lot of the interaction between the app and the container console is also not happening; for example, when the Go app loads the initial JS bundle, loading progress is not shown in the console as it is when I run Expo outside of Docker.
At this point my working theory is that this may have something to do with websockets not playing nicely with the container. Unfortunately Expo has so much wrapped under it that it's tough for me to figure out exactly why.
Given that I'm probably not the only one who will encounter this as more people adopt the new CLI and want a consistent dev environment, I'm hoping to crowdsource some debugging ideas to try to get this working!
(Additional note -- wanted to try using a tunnel to see if this fixes things, but ngrok is also quite a pain to get working correctly through docker, so really trying to avoid that if possible!)

Airflow on Google Cloud Composer vs Docker

I can't find much information on what the differences are in running Airflow on Google Cloud Composer vs Docker. I am trying to switch our data pipelines that are currently on Google Cloud Composer onto Docker to just run locally but am trying to conceptualize what the difference is.
Cloud Composer is a GCP managed service for Airflow. Composer runs in something known as a Composer environment, which runs on Google Kubernetes Engine cluster. It also makes use of various other GCP services such as:
Cloud SQL - stores the metadata associated with Airflow,
App Engine Flex - Airflow web server runs as an App Engine Flex application, which is protected using an Identity-Aware Proxy,
GCS bucket - in order to submit a pipeline to be scheduled and run on Composer, all we need to do is copy our Python code into a GCS bucket. Within that bucket there is a folder called dags; any Python code uploaded into that folder is automatically picked up and processed by Composer (see the example command below).
What are the benefits of Cloud Composer?
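For example, deploying a DAG can be as simple as copying the file into the environment's bucket; the bucket name below is illustrative:

gsutil cp my_dag.py gs://us-central1-my-composer-env-bucket/dags/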
Focus on your workflows, and let Composer manage the infrastructure (creating the workers, setting up the web server, the message brokers),
One-click to create a new Airflow environment,
Easy and controlled access to the Airflow Web UI,
Provides logging and monitoring metrics, and alerts you when your workflow is not running,
Integrates with all Google Cloud services: Big Data, Machine Learning and so on. You can also run jobs elsewhere, e.g. on another cloud provider (Amazon).
Of course you have to pay for the hosted service, but the cost is low compared to hosting a production Airflow server on your own.
Airflow on-premise
DevOps work that needs to be done: creating a new server, managing the Airflow installation, taking care of dependency and package management, checking server health, and handling scaling and security.
pulling an Airflow image from a registry and creating the container,
creating a volume that maps the directory on the local machine where DAGs are held to the location where Airflow reads them in the container,
whenever you want to submit a DAG that needs to access a GCP service, you need to take care of setting up credentials. A service account should be created for the application and its key downloaded as a JSON file that contains the credentials. This JSON file must be mounted into your Docker container, and the GOOGLE_APPLICATION_CREDENTIALS environment variable must contain the path to the JSON file inside the container (see the sketch after this list).
To sum up, if you don't want to deal with all of those DevOps problems, and instead just want to focus on your workflows, then Google Cloud Composer is a great solution for you.
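A minimal sketch of running such a container, assuming a local key file at ./keys/service-account.json and an image called my-airflow (both placeholders):

docker run --detach \
  --volume "$(pwd)/dags:/opt/airflow/dags" \
  --volume "$(pwd)/keys/service-account.json:/opt/airflow/keys/service-account.json:ro" \
  --env GOOGLE_APPLICATION_CREDENTIALS=/opt/airflow/keys/service-account.json \
  --publish 8080:8080 \
  my-airflow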
Additionally, I would like to share with you tutorials that set up Airflow with Docker and on GCP Cloud Composer.

Procedure to connect a composer to a personalized fabric

I'm having some trouble actually understanding the procedure to connect Composer to Fabric (not the samples).
My objective here is to configure a Fabric network and then connect this configured network to a Composer .bna.
After making all the changes I want to the network, I need to run the network with the docker-compose commands, correct? Just like the byfn.sh script does?
After that I should generate a PeerAdmin card, right? I believe I should use the connection.json file and the composer-cli command, or is there another way to do it?
And then I can start the deployment procedure via Composer?
I'm just a little confused, because with the Fabric tools you have all those startFabric.sh and createPeerAdminCard.sh scripts, but some of them are different from the fabric-samples, and well... I'm a real beginner on the subject, but I just need more understanding of the procedure between a configured network and Composer.
Once you've tested that your personalized Fabric environment is up and running (you mention the BYFN 2-org Fabric blockchain network, which is a sample network provided by Hyperledger Fabric: its script performs some chaincode tests, like invokes and queries, e.g. querying the ledger after updating an asset, to ensure the scripted BYFN sample Fabric network is up and running OK), you can move on to Composer. Hyperledger Composer, being a development framework and toolset (not a blockchain per se), is largely for writing smart contracts (i.e. business networks), and of course for writing client-side apps too. It 'consumes' that Fabric infrastructure and deploys smart contracts to it, in the form of chaincode that runs as native Node.js chaincode.
Now to Composer: if you look at the 'Multi-Org' tutorial (how to interact with a business network / smart contract between two orgs, and participants from those orgs), it tells you what is needed to configure Composer to interact with a blockchain network that has TLS enabled, etc. It includes defining connection profiles (e.g. what the nodes of the network are, what ports, what config parameters, what the defined Fabric endorsement policy for the business network is, and so on), business network cards (cards give blockchain identities the ability to transact in that business network and make it known 'who' performed a transaction), what access control rules apply, what queries to run, and what transaction logic and units of work to execute to update the blockchain ledger. Composer is one way of developing your smart contracts; it is model-driven and also aims to take away many of the transformation, type-handling and validation aspects you would otherwise have to implement yourself. Your aim is to check that your custom Fabric network is all running correctly (as mentioned earlier), then come to Composer to configure the Composer elements, all of which are described in the Composer docs -> https://hyperledger.github.io/composer
In answer to your questions:
You would need to ensure your docker-compose (not 'composer') YAML files reflect the custom Fabric network you want to spin up.
Yes, you need someone with peer admin authority/capability in Fabric to install the business network, and someone with (minimally) channel admin authority/capability to start the business network on the channel (as you'll see in the multi-org tutorial, both of these are done with a Composer business network card that happens to be called PeerAdmin; a sketch of the commands appears after this list).
Correct.
The tools you mention in your last paragraph are Composer tools; the aim of those scripts is to allow a Composer developer to spin up a local, development Fabric to test against. You won't find them in 'fabric-samples' because they are provided by Composer (composer-tools), as described here -> https://hyperledger.github.io/composer/latest/installing/development-tools.html .
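As an illustration only, the multi-org tutorial's flow boils down to commands along these lines; the connection profile, certificate/key files, BNA file, network name and version are placeholders for your own artifacts:

# create and import a PeerAdmin card from your connection profile and crypto material
composer card create -p connection-org1.json -u PeerAdmin \
  -c Admin@org1.example.com-cert.pem -k key.pem \
  -r PeerAdmin -r ChannelAdmin -f PeerAdmin@byfn-network-org1.card
composer card import -f PeerAdmin@byfn-network-org1.card
# install the business network onto the peers, then start it on the channel
composer network install -c PeerAdmin@byfn-network-org1 -a my-network.bna
composer network start -c PeerAdmin@byfn-network-org1 \
  -n my-network -V 0.0.1 -A admin -S adminpw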

Dockerizing composer-playground with deployed (embedded) business network archive

I found out there is a hyperledger/composer-playground Docker image. It can easily be started using
docker run --name composer-playground --publish 8080:8080 --detach hyperledger/composer-playground
Now I want to make a Dockerfile out of it that can serve an existing Business Network Definition as demo application. It should be embedded, so no real Fabric network is required. What possibilities do I have to accomplish that?
First idea: Card file structures could be copied into /home/composer/.composer/cards but as far as I understand, these cards could only have the embedded connection type, otherwise a real Fabric network is required.
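As a rough sketch of that first idea, an existing cards directory could simply be mounted into the container; the local ./cards path is hypothetical and would need to hold cards that use the embedded connection type:

docker run --name composer-playground --publish 8080:8080 \
  --volume "$(pwd)/cards:/home/composer/.composer/cards" \
  --detach hyperledger/composer-playground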
Second idea: Is there some API endpoint that could be queried to create an embedded network for a .bna file?
Interesting idea, and with the direction of Composer Playground cropping up a bit recently, it would be a good one to discuss on a Composer community call.
As for how things are now, I think you'll have to set everything up with a real Fabric. I haven't seen a Dockerfile that does that, but it seems doable. The hosted playground does everything in local storage and PouchDB (IndexedDB), so I don't think you would be able to get a demo BNA in there without changes to the playground.
One thing that I had pondered in the past was making it possible to configure where the playground looks for sample networks, and that could even include the primary 'get started' network.
Might that help in this case? Could be worth opening a Github issue to explore the use cases if that does sound useful (pull requests gratefully accepted!)

Asset Management on Hyperledger Playground

I installed Hyperledger Playground on my local Ubuntu VM.
I would like to build an asset management system for my company on Hyperledger.
Is Playground enough, or do I need Fabric? If Playground is enough, how do I connect more nodes to it for redundancy?
Thanks in advance.
Playground is only for development of the chaincode; it runs and stores all data in the browser's localStorage. You need a Fabric network to deploy it, and it is in that network where you define how many peers exist.
