Could anyone kindly help me with this:
I understand that there is no "compiling" and "deploying" of contracts in Hyperledger Sawtooth as such. I tried working with the tuna supply chain example, and there it seemed like a single command, docker-compose up, did it all. But how exactly does that work?
I mean, say I was building my own network on Sawtooth and I had written all the business logic (i.e. the transaction processors): which Docker files would I need to create, and how would I create them?
The tuna supply chain code can be found here: https://github.com/hyperledger/education/tree/master/LFS171x/sawtooth-material
THANKS!
The analog of contracts in Sawtooth is called a Transaction Processor (TP). Sawtooth can be deployed to a native machine as executables or interpreted code, or deployed as Docker images. Docker Compose files allow you to bring up a network of Docker containers that may or may not interact with each other.
There are a number of language SDKs for Sawtooth. For information on developing Sawtooth TPs, you should read through and understand the architecture, components, API, etc.: https://sawtooth.hyperledger.org/docs/core/releases/latest/
There is also the GitHub repo, which is chock-full of example TPs: https://github.com/hyperledger/sawtooth-core
As Frank rightly said, Sawtooth provides interfaces for writing your smart contract in any language. You can define how transactions are structured and how they are processed to change the state of the blockchain.
These smart contracts are executed by transaction processors, which are organized by transaction family. Validation is handled by validators; once validation is done, the validator forwards transactions of a particular transaction family to that family's transaction processor.
All serialization is done with Protobuf to save space and increase speed.
There is a great example that will clarify these concepts:
https://github.com/askmish/sawtooth-simplewallet
In order to understand the above repo, you need a clear understanding of transaction families, transaction processors, and the apply method.
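To give a feel for what a TP looks like, here is a minimal sketch against the Python SDK (sawtooth_sdk). The family name, payload format and validator URL are placeholders of my own, not taken from the simplewallet repo:

```python
import hashlib

from sawtooth_sdk.processor.core import TransactionProcessor
from sawtooth_sdk.processor.exceptions import InvalidTransaction
from sawtooth_sdk.processor.handler import TransactionHandler

FAMILY_NAME = 'simplewallet'  # placeholder family name
NAMESPACE = hashlib.sha512(FAMILY_NAME.encode()).hexdigest()[:6]


class SimpleWalletHandler(TransactionHandler):
    @property
    def family_name(self):
        return FAMILY_NAME

    @property
    def family_versions(self):
        return ['1.0']

    @property
    def namespaces(self):
        return [NAMESPACE]

    def apply(self, transaction, context):
        # This is the "contract": the validator calls it for every
        # transaction belonging to this family. The payload format is up
        # to you; a simple CSV payload is assumed here.
        try:
            action, key, value = transaction.payload.decode().split(',')
        except ValueError:
            raise InvalidTransaction('Malformed payload')

        address = NAMESPACE + hashlib.sha512(key.encode()).hexdigest()[:64]
        if action == 'set':
            context.set_state({address: value.encode()})
        else:
            raise InvalidTransaction('Unknown action: {}'.format(action))


if __name__ == '__main__':
    # Connect to the validator's component endpoint; tcp://validator:4004
    # is what the standard docker-compose files typically expose.
    processor = TransactionProcessor(url='tcp://validator:4004')
    processor.add_handler(SimpleWalletHandler())
    processor.start()
```

The apply method is where your business logic lives; everything else is wiring the handler to a validator.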
The post is old; however, if you have discovered a better solution please share it. Otherwise, here is what I found.
You need to run the transaction processor and connect it to a validator in order to submit your transactions.
In reality it will be rare to have all the validators on the same machine, which is the case in most of the examples in the Hyperledger Sawtooth documentation for Docker, Ubuntu and AWS. In a real-life scenario, companies on a business network will be running their own systems within their own networks, with a couple of validators, the settings-tp, the rest-api and custom transaction processors. These validators will connect to other validators on the business network, which is why only the validator's port is advised to be opened to the world.
I wish there were an easy way to register a custom transaction processor on a running network, possibly something like a CLI similar to Azure's or AWS's: a native Sawtooth CLI that could connect to any Sawtooth validator and upload a transaction processor using a certificate, so that the transaction family becomes available for all future transactions.
I have created a PoC application using Hyperledger Fabric for three organizations.
Most of the content on the internet instructs you to use the cloud for deploying the solution:
https://juarezjunior.medium.com/how-to-deploy-your-first-hyperledger-fabric-components-with-azure-kubernetes-service-aks-760563d9d543
https://medium.com/google-cloud/fabric-on-google-cloud-platform-97525323457c
https://www.youtube.com/watch?v=ubrA3W1JMk0
If the peers, orderers, and CA servers of all organizations are deployed in a cloud, then how is Hyperledger Fabric distributed?
Can this setup be made on distributed, on-premise infrastructure?
Is there any source for reference/links to do this sort of setup?
Any suggestions/references will be very helpful.
If the peers, orderers, and CA servers of all organizations are deployed in a cloud, then how is Hyperledger Fabric distributed?
Each service (orderer, peer, etc.) would be run on a different (virtual) machine within the same environment/cloud provider. Those machines could be distributed across various data centers globally, across different cloud providers, or even on many different organisations' hardware. It makes relatively little difference: as long as they're not all on the same box under the control of one organisation/person, it will be distributed.
Can this setup be made on distributed, on-premise infrastructure?
Yes, it can be set up to run however you like, although you may run into access issues from firewalls, etc. Perhaps you'd have different departments run their own peers within a single organisation across many offices.
Is there any source for reference/links to do this sort of setup?
I believe most people start with the scripts from the Hyperledger Fabric codebase. There's documentation here, https://hyperledger-fabric.readthedocs.io/en/release-1.4/build_network.html, about spinning up the "build your first network" script, byfn.sh (it looks like this may have been removed).
https://github.com/hyperledger/fabric-samples has a ci folder. Within it there are steps to build/run/test the codebase examples. For example, https://github.com/hyperledger/fabric-samples/blob/main/ci/scripts/run-test-network-basic.sh calls network.sh, which is here: https://github.com/hyperledger/fabric-samples/blob/main/test-network/network.sh
If you really want to understand the necessary steps, you'll need to work through it all. There's also some good documentation on what the various parts of the hyperledger-fabric system do here: https://github.com/hyperledger/fabric/tree/345a08b5cd30ef937a8270d1722462e4cad1caa9/docs/source - you'll need to navigate through the directories to the parts you're interested in and locate the .md files which contain the documentation, e.g.:
Peer - docs/source/peers/peers.md
Orderer - docs/source/orderer/ordering_service.md
Smart Contracts (aka Chaincode) - docs/source/smartcontract/smartcontract.md
Channels - docs/source/create_channel/channel_policies.md
You may also find some Dockerfiles in various repositories with different setups.
I understand that Interledger is an open protocol suite for sending payments across different ledgers -- it supports and integrates with Bitcoin and Ethereum (ISO 20022).
Does Interledger support Hyperledger and/or vice versa? I.e., are there any integration possibilities between Interledger and Hyperledger, e.g. Hyperledger <-> Interledger <-> Ethereum and/or Bitcoin?
I understand that Hyperledger does not have a cryptocurrency, but I might have digital assets within my Hyperledger network that can be exchanged for Ether or Bitcoin.
Thus I wish to know whether there are integration possibilities between Hyperledger and Interledger.
Interledger is a protocol, not a system, so I would rephrase your question as:
I wish to know if there are integration possibilities between Hyperledger and other ledgers using Interledger?
The answer is yes, but it also depends on the use case. What do you mean by "integration"?
Interledger defines some standards for distributed transaction execution according to a two-phase commit strategy. It is specifically well suited to transfers of digital assets across multiple ledgers because its resiliency depends on the economic incentive for intermediaries to claim the assets that have been transferred to them (and, in so doing, provide the key for the next intermediary to do the same).
The most important standard is the use of a SHA-256 hash in the prepare request and the pre-image of that hash as the commit trigger of the two-phase asset transfer on each ledger. We call the hash a condition and the pre-image the fulfillment.
If you want to perform a transaction that transfers digital assets from a sender on one ledger to a receiver on another ledger then you will first establish a condition that can only be fulfilled by the receiver (i.e. only the receiver knows the pre-image).
This way you can involve an untrusted intermediary that will accept a transfer on the sender's ledger and make a corresponding transfer to the receiver on their ledger. Both transfers are prepared using the condition and when the receiver releases the fulfillment to their ledger the assets are transferred to them.
The intermediary will then have possession of the fulfillment (they observed the assets they transferred to the receiver being committed) and will use the same fulfillment to claim the assets transferred to them by the sender.
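As a rough sketch of those condition/fulfillment mechanics (plain Python, not any particular ILP SDK):

```python
import hashlib
import os

# The receiver generates a secret pre-image (the fulfillment)...
fulfillment = os.urandom(32)

# ...and shares only its SHA-256 hash (the condition) with the sender and
# any intermediaries, who use it in their prepare requests.
condition = hashlib.sha256(fulfillment).digest()

# Each ledger escrows a transfer locked to the condition. When the receiver
# releases the fulfillment, anyone can verify it against the condition...
assert hashlib.sha256(fulfillment).digest() == condition

# ...and each intermediary reuses the same fulfillment to claim the transfer
# that was prepared for them on the previous ledger.
```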
Any Hyperledger ledger that can be used to underwrite asset ownership and support this two-phase commit strategy can be used in an Interledger payment.
There are examples of writing smart contracts that do just this in Ethereum so I assume the same could be achieved using Fabric, Sawtooth or any other Hyperledger ledger.
I am not aware of any existing implementations of such a plugin that would allow the reference ILP connector to be run as an intermediary between a Hyperledger ledger and other ILP-compatible ledgers but I'd certainly welcome any efforts to build one and would be happy to assist.
Interledger looks to be a service that wants you to route financial transactions through it. There is some simple sample code for compatible wallets and transactions in JavaScript; presumably you can do this in any language.
Which leads me to point out that Hyperledger supports smart contracts and applications written in Go, Java, Python, and JavaScript (through Hyperledger Composer), so there is a pretty good chance that you can implement an ISO 20022 / Interledger-compatible data model and protocol.
HOWEVER
You need to follow best practices: smart contracts should never directly update external services, as there is no way of rolling back external service changes if the smart contract sends successful external transactions but then fails for other reasons.
So, you need to design multi-stage transactions in your smart contracts and related applications. Applications will have to coordinate with smart contracts and post on their behalf to other services, recording results in the ledger and triggering next stage updates and transactions.
This allows the blockchain ledger to reflect the reality of external states from Interledger or whatever ISO 20022 compatible service you use.
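As a very rough sketch of that multi-stage pattern, where every object and method name below is a hypothetical placeholder of my own and not part of any Hyperledger SDK:

```python
# Hypothetical application-side coordinator: the contract only records
# intent and observed outcomes; the application talks to external services.

def settle_payment(ledger, interledger_client, payment_id):
    # Stage 1: the smart contract has already recorded a "payment requested"
    # state on the ledger; the application reads that intent.
    request = ledger.query_payment_request(payment_id)

    # Stage 2: the application (not the contract) performs the external
    # transfer against the ISO 20022 / Interledger-compatible service.
    result = interledger_client.send_payment(request)

    # Stage 3: record the external outcome back on the ledger, which lets
    # the contract trigger its next stage or a compensating action.
    ledger.submit_result(payment_id, result)
```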
This all presumes that the other financial institution refuses direct participation with the smart contract and Hyperledger blockchain; direct participation would always be more efficient, reliable, and secure.
It sounds like you want something like Hyperledger Quilt, which interoperates between different blockchain technologies.
So, let us consider a typical trade finance process flow. The exporter deploys a contract that contains the conditions of the shipment, and a hash is generated once the deployment is finished.
Questions:
1) Where is the contract stored?
2) How other participants such as customs and importer can access this contract?
3) Can we activate participant level access to the contract on the blockchain?
There are several aspects of Ethereum and Hyperledger that make them quite different. Let me give a somewhat simplified answer so as not to go into too much detail or make the answer too long.
First of all, Ethereum is primarily a public blockchain that works in a certain intended way. Similarly, the Bitcoin blockchain works in a certain intended way. Hyperledger is not like that; rather, it's an umbrella for distributed ledger technologies (in my terminology, not the same as blockchain) which all aim to provide a very flexible architecture, so that one can build all kinds of ledger-backed systems with pretty much any properties needed. One could compare this to an imaginary Bitcoin umbrella that provides technology for producing your own altcoins, with pluggable parts for e.g. consensus, blockchain storage, node composition, etc. In short, these all aim to solve different problems, and one should not assume that one will fit all.
Coming back to your questions.
1) Where is the contract stored?
Ethereum has contracts (called smart contracts) on the chain, i.e. code is compiled to bytecode and the resulting bytes are sent within a transaction to be persisted onto the Ethereum blockchain. This is done once, when you deploy the smart contract. After this, one can interact with the smart contract via other transactions.
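For illustration, deployment with web3.py looks roughly like this; the ABI, bytecode and node URL are placeholders, and a local dev node with an unlocked account is assumed:

```python
from web3 import Web3

# Placeholder values: in practice these come from compiling your Solidity
# source (e.g. with solc) and from your own node/provider.
abi = []          # contract ABI from the compiler
bytecode = "0x"   # compiled EVM bytecode from the compiler

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder node URL

# Deployment is just a transaction that carries the compiled bytecode.
contract = w3.eth.contract(abi=abi, bytecode=bytecode)
tx_hash = contract.constructor().transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

# From now on, the contract lives at this address and is interacted with
# through further transactions (or read-only calls).
deployed = w3.eth.contract(address=receipt.contractAddress, abi=abi)
```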
Hyperledger in theory does not define this; the code could be on a ledger or it might not be. Take Fabric, for instance: it deploys the code into a sandboxed Docker container, which can then be interacted with using transactions.
2) How other participants such as customs and importer can access this contract?
Short answer is that they are given access via credentials.
In both Ethereum and Hyperledger this is open for you to decide yourself. We now assume the code has been deployed in both cases: as code on the blockchain for Ethereum, and as a Docker container in Fabric.
In Ethereum the code is (simplifying a bit) publicly accessible/visible, which means you need to employ some kind of check so that only those who should be able to interact with the smart contract can do so. One way is to check the sender of the transaction and only allow certain ones. It's similar to traditional systems, where one usually needs to authenticate/authorize to be allowed in and to see/alter data.
In Hyperledger it would most likely be modelled in a similar manner; in Fabric, for example, there is also the Certificate Authority, which hands out certificates that allow access to different parts of the system, e.g. transport, endorsement or transactions.
3) Can we activate participant level access to the contract on the blockchain?
Yes: each participant in both systems has credentials, and the designer of the smart contract can use these to control access.
Also, in Fabric there are channels that partition the ledger, which can be used for access control.
HTH.
1) The contract resides on the ledger. Whenever a transaction is invoked, the corresponding method in the contract gets executed on all the validating peers.
2) Other participants can access this contract using their pre-defined user credentials, which they can use to enroll themselves and invoke transactions on the contract.
3) Yes, we can activate participant-level access to the contract by defining attributes for every user and allowing only those users who possess certain attributes to access specific parts of the contract.
This question has been discussed many times, but I'd like to hear some best practices and real-world examples of using each of the approaches below:
Designing containers which are able to check the health of dependent services. A simple script like wait-for-it can be useful for developing this kind of container, but it isn't suitable for more complex deployments. For instance, the database could accept connections while the migrations haven't been applied yet.
Making the container able to post its own status to Consul/etcd. All dependent services poll a certain endpoint which contains the status of the needed service. Looks nice, but it seems redundant, doesn't it?
Managing the startup order of containers with an external scheduler.
Which of the approaches above is preferable, in the context of the absence/presence of orchestrators like Swarm/Kubernetes/etc. in the delivery process?
I can take a stab at the Kubernetes perspective on those.
Designing containers which are able to check the health of dependent services. A simple script like wait-for-it can be useful for developing this kind of container, but it isn't suitable for more complex deployments. For instance, the database could accept connections while the migrations haven't been applied yet.
It sounds like you want to differentiate between liveness and readiness. Kubernetes provides probes for both, which you can use to check health and to wait before serving any traffic.
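For example, a container spec might declare both probes roughly like this (expressed here as a Python dict mirroring the Kubernetes manifest fields; the image, paths and ports are made up):

```python
# Illustrative container spec; field names match the Kubernetes manifest.
container = {
    "name": "api",
    "image": "example/api:latest",
    # Liveness: restart the container if the process is wedged.
    "livenessProbe": {
        "httpGet": {"path": "/healthz", "port": 8080},
        "initialDelaySeconds": 10,
        "periodSeconds": 5,
    },
    # Readiness: hold back traffic until the app reports it is ready,
    # e.g. only after it has confirmed that DB migrations were applied.
    "readinessProbe": {
        "httpGet": {"path": "/ready", "port": 8080},
        "initialDelaySeconds": 5,
        "periodSeconds": 5,
    },
}
```

The readiness probe is what replaces ad-hoc wait-for-it scripts: the application itself decides when it is ready, and Kubernetes holds back traffic until then.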
Making the container able to post its own status to Consul/etcd. All dependent services poll a certain endpoint which contains the status of the needed service. Looks nice, but it seems redundant, doesn't it?
I agree. Having to maintain state separately is not preferred. However, in cases where it is absolutely necessary, if you really want to store the state of a resource, it is possible to use a third party resource.
Managing the startup order of containers with an external scheduler.
This seems mostly tangential to the discussion. However, Pet Sets, soon to be replaced by Stateful Sets in Kubernetes v1.5, give you a deterministic order of initialization of pods. For containers in a single pod, there are init containers, which run serially and in order prior to the main container.
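A minimal sketch of the init-container pattern (again as a Python dict mirroring the manifest; the image and command are illustrative):

```python
# Pod spec fragment: each init container must exit successfully, in order,
# before the "app" container starts.
pod_spec = {
    "initContainers": [
        {
            "name": "wait-for-db",
            "image": "busybox",
            "command": ["sh", "-c", "until nc -z db 5432; do sleep 2; done"],
        }
    ],
    "containers": [
        {"name": "app", "image": "example/app:latest"}
    ],
}
```

Kubernetes only starts the app containers once every init container has completed, which gives you ordering without an external scheduler.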
Recently I was reading about the concept of layers in FoundationDB. I like their idea: the decomposition of storage on one side and access to it on the other.
There are some unclear points regarding the implementation of layers, especially how they communicate with the storage engine. There are two possible answers: they are part of the server nodes and communicate with the storage through fast native API calls (e.g. as linked modules hosted in the server process), or they are hosted inside the client application and communicate over a network protocol. For example, the SQL layer of many RDBMSs is hosted on the server. How are things with FoundationDB?
PS: These two cases differ from a performance point of view, especially when the client-server communication is high-latency.
To expand on what Eonil said: the answer rests on the distinction between two different senses of "client" and "server".
Layers are not run within the database server processes. They use the FDB client API to make requests of the database, and do not (with one exception*) get to pierce the transactional key-value abstraction.
However, there is nothing stopping you from running the layers on the same physical (or virtual) server machines as the database server processes. And, as that post from the community site mentions, there are use cases where you might very much wish to do this in order to minimize latencies.
*The exception is the Locality API, which is mostly useful in exactly those cases where you want to co-locate client-side layers with the data on which they operate.
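To make the client-side nature concrete, here is a toy layer sketched against the FDB Python bindings; the API version, key scheme and helper names are illustrative:

```python
import fdb

fdb.api_version(630)  # illustrative API version
db = fdb.open()       # the "layer" below runs entirely in the client process


# A toy layer that namespaces keys using the tuple layer. Note that it only
# uses the ordinary transactional client API -- nothing runs server-side.
@fdb.transactional
def set_item(tr, collection, key, value):
    tr[fdb.tuple.pack((collection, key))] = value.encode()


@fdb.transactional
def get_item(tr, collection, key):
    raw = tr[fdb.tuple.pack((collection, key))]
    return bytes(raw).decode() if raw.present() else None


set_item(db, "users", "alice", "hello")
print(get_item(db, "users", "alice"))
```

The point is that such helpers live entirely in the client process; whether that process happens to run on the same machine as an fdbserver is a deployment choice, as described above.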
Layers are built on top of the client-side library.
Cited from http://community.foundationdb.com/questions/153/what-layers-do-you-want-to-see-first
That's a good question. One reason that it doesn't always make sense to run layers on the server is that in a distributed database, that data is scattered -- the servers themselves are a network hop away from a random piece of data, just like the client.
Of course, for something like an analytics layer which is aware of what data each server contains, it makes sense to run a distributed version co-located with each of the machines in the FDB cluster.