I have the same doubt as this question:
Getting timestamps in a deterministic way in Hyperledger Composer transactions
My question is: won't this lead to non-deterministic code? For instance, if I have more than one peer, wouldn't the timestamp method return different values on each of them?
Could anyone kindly help me with this:
I understand that there is no "compiling" and "deploying" of contracts in Hyperledger Sawtooth as such. I tried working with the tuna supply chain example, and there it seemed like one command, $ docker-compose up, did it all. But how exactly does that work?
I mean, say I was making my own network on Sawtooth and I had written all the business logic (i.e. the transaction processors): what Docker files do I need to make, and how do I make them?
The tuna supply chain code can be found here: https://github.com/hyperledger/education/tree/master/LFS171x/sawtooth-material
Thanks!
The analog of a contract in Sawtooth is called a Transaction Processor (TP). Sawtooth can be deployed to a native machine as executables or interpreted code, or deployed as Docker images. A Docker Compose file brings up a network of Docker containers that may or may not interact with each other.
There are a number of language development kits for Sawtooth. For information on developing Sawtooth TPs, you should read through and understand the architecture, components, API, etc.: https://sawtooth.hyperledger.org/docs/core/releases/latest/
There is also the GitHub repo, which is chock-full of example TPs: https://github.com/hyperledger/sawtooth-core
As Frank rightly said, Sawtooth provides interfaces for writing your smart contract in any language. You define how transactions are structured and how they are processed to change the state of the blockchain.
These smart contracts are executed by transaction processors, each of which belongs to a transaction family. All the validation is handled by validators; once validation is done, the validator forwards the transactions of a particular transaction family to that family's transaction processor.
All serialization is done with Protobuf, to save space and add speed.
There is a great example that will clear up these concepts:
https://github.com/askmish/sawtooth-simplewallet
To understand the above repo, you need a clear understanding of transaction families, transaction processors, and the apply method; a minimal sketch follows below.
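For orientation, here is a minimal sketch of a transaction processor using the Sawtooth Python SDK. The family name and the payload handling are placeholders of mine, not the simplewallet code itself:

```python
import hashlib

from sawtooth_sdk.processor.core import TransactionProcessor
from sawtooth_sdk.processor.exceptions import InvalidTransaction
from sawtooth_sdk.processor.handler import TransactionHandler

FAMILY_NAME = 'mywallet'  # placeholder family name
# state addresses for a family live under a 6-hex-char namespace prefix
NAMESPACE = hashlib.sha512(FAMILY_NAME.encode()).hexdigest()[:6]


class MyWalletHandler(TransactionHandler):
    @property
    def family_name(self):
        return FAMILY_NAME

    @property
    def family_versions(self):
        return ['1.0']

    @property
    def namespaces(self):
        return [NAMESPACE]

    def apply(self, transaction, context):
        # apply() holds the business logic: decode the client's payload,
        # then read/write global state through the context object,
        # e.g. context.get_state([...]) / context.set_state({...})
        payload = transaction.payload.decode()
        if not payload:
            raise InvalidTransaction('empty payload')
        # ... business logic goes here ...


if __name__ == '__main__':
    # connect the TP to a validator; tcp://localhost:4004 is the
    # validator's default component endpoint
    processor = TransactionProcessor(url='tcp://localhost:4004')
    processor.add_handler(MyWalletHandler())
    processor.start()
```

A Docker Compose file for your own network then just declares a validator, the settings-tp, the REST API, and a container that runs a script like this pointed at the validator's endpoint.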
The post is old; however, if you have discovered a better solution, please share. Otherwise, here is what I discovered.
You need to run the transaction processor and connect it to a validator in order to submit your transactions.
In reality it will be rare to have all the validators on the same machine, which is the setup in most of the examples in the Hyperledger Sawtooth documentation for Docker, Ubuntu and AWS. In a real-life scenario, the companies on a business network will run their own systems within their own networks, each with a couple of validators, the settings-tp, the REST API and their custom transaction processors. These validators connect to the other validators on the business network, which is why it is advised to open only the validator's port to the world.
I wish there were an easy way to register a custom transaction processor on a running network, possibly a CLI similar to Azure's or AWS's: a native Sawtooth CLI that could connect to any Sawtooth validator and upload a transaction processor using a certificate, so that the transaction family becomes available for all future transactions.
I am trying to understand how the "transaction mempool" works in Hyperledger Fabric. I am mainly looking at the documentation here: http://hyperledger-fabric.readthedocs.io/en/release-1.1/peers/peers.html#peers-and-orderers
I know how Bitcoin works, and I am thinking in 'Bitcoin' terms (hence the word 'mempool').
As I understand it, in Hyperledger there are three parties: applications, peers and orderers. All parties have permission credentials from the MSP. An application submitting a transaction first needs to acquire a sufficient number of endorsements from peers. After it appends these endorsements to the transaction, it sends it to an orderer, which puts it in its 'mempool'.
The documentation clearly states that forks can't happen, and that once a transaction is included in a block it is final.
My question is: after the application receives the endorsements and sends the transaction to an orderer, how can we be sure that it doesn't also send it to another orderer? And what would happen if two different orderers had the same transaction in their memory (before posting the relevant block)?
There is no concept of a mempool in Hyperledger Fabric. In a production environment, all transactions are written to a crash-fault-tolerant Kafka cluster, which gives all the ordering-service nodes a single view of all the transactions. Orderers read back from Kafka to cut blocks of transactions; they do not send them to other orderers.
You can read more about it in my answer here: Transactions order in a channel with multiple Orderers
I understand that Interledger is an open protocol suite for sending payments across different ledgers -- it supports and integrates with Bitcoin and Ethereum (ISO 20022).
Does Interledger support Hyperledger, and/or vice versa? I.e., are there any integration possibilities between Interledger and Hyperledger, such as Hyperledger <-> Interledger <-> Ethereum and/or Bitcoin?
I understand that Hyperledger does not have a cryptocurrency, but I might have digital assets within my Hyperledger network that could be exchanged for ether or bitcoin.
Thus I wish to know if there are integration possibilities between Hyperledger and Interledger.
Interledger is a protocol, not a system, so I would rephrase your question as:
"I wish to know if there are integration possibilities between Hyperledger and other ledgers using Interledger?"
The answer is yes, but it also depends on the use case. What do you mean by "integration"?
Interledger defines standards for distributed transaction execution following a two-phase commit strategy. It is specifically well-suited to transfers of digital assets across multiple ledgers, because its resiliency depends on the economic incentives of the intermediaries to claim the assets that have been transferred to them (and, in so doing, provide the key for the next intermediary to do the same).
The most important standard is the use of a SHA-256 hash in the prepare request, with the pre-image of that hash as the commit trigger of the two-phase asset transfer on each ledger. We call the hash the condition and the pre-image the fulfillment.
If you want to perform a transaction that transfers digital assets from a sender on one ledger to a receiver on another ledger, you first establish a condition that can only be fulfilled by the receiver (i.e. only the receiver knows the pre-image).
This way you can involve an untrusted intermediary that will accept a transfer on the sender's ledger and make a corresponding transfer to the receiver on the receiver's ledger. Both transfers are prepared using the condition, and when the receiver releases the fulfillment to their ledger, the assets are transferred to them.
The intermediary then has possession of the fulfillment (they observed the assets they transferred to the receiver being committed) and uses the same fulfillment to claim the assets transferred to them by the sender.
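In code, the condition/fulfillment relationship is just a SHA-256 pre-image check. A minimal Python illustration (the variable names are mine, not part of any ILP library):

```python
import hashlib
import os

# The receiver generates a random fulfillment (the pre-image) and shares
# only its SHA-256 hash, the condition, with the other participants.
fulfillment = os.urandom(32)
condition = hashlib.sha256(fulfillment).digest()

def verify(condition: bytes, fulfillment: bytes) -> bool:
    """Check that a revealed fulfillment matches the agreed condition."""
    return hashlib.sha256(fulfillment).digest() == condition

# Each ledger prepares its transfer against the condition; once the
# receiver reveals the fulfillment, every hop can verify it and use it
# to commit the transfer that was prepared for them.
assert verify(condition, fulfillment)
```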
Any Hyperledger ledger that can be used to underwrite asset ownership and support this two-phase commit strategy can be used in an Interledger payment.
There are examples of writing smart contracts that do just this in Ethereum, so I assume the same could be achieved using Fabric, Sawtooth or any other Hyperledger ledger.
I am not aware of any existing implementations of such a plugin that would allow the reference ILP connector to be run as an intermediary between a Hyperledger ledger and other ILP-compatible ledgers but I'd certainly welcome any efforts to build one and would be happy to assist.
Interledger looks to be a service that wants you to route financial transactions through it. There is some simple sample code for compatible wallets and transactions in JavaScript; presumably you can do this in any language.
Which leads me to point out that Hyperledger supports smart contracts and applications written in Go, Java, Python and JavaScript (through Hyperledger Composer), so there is a pretty good chance you can implement an ISO 20022 / Interledger-compatible data model and protocol.
HOWEVER
You need to follow best practices: smart contracts should never directly update external services, as there is no way to roll back external-service changes if the smart contract sends a successful external transaction but then fails for other reasons.
So you need to design multi-stage transactions in your smart contracts and related applications. Applications have to coordinate with the smart contracts and post on their behalf to other services, recording the results in the ledger and triggering next-stage updates and transactions; a sketch of this loop follows below.
This allows the blockchain ledger to reflect the reality of external states from Interledger or whatever ISO 20022-compatible service you use.
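As an illustration of that coordination loop, here is a hypothetical sketch; the client objects and method names are placeholders, not a real Fabric or Interledger API:

```python
# Multi-stage pattern: the smart contract only records intent; an
# off-chain application performs the external call and reports back.
# ledger_client and interledger_client stand in for whichever SDKs
# your stack actually provides.

def process_pending_transfers(ledger_client, interledger_client):
    # Stage 1: the smart contract has recorded transfers in state PENDING.
    for transfer in ledger_client.query(state='PENDING'):
        try:
            # Stage 2: the application, not the contract, calls out
            # to the external ISO 20022 / Interledger service.
            receipt = interledger_client.send_payment(
                destination=transfer['destination'],
                amount=transfer['amount'],
            )
            # Stage 3: record the external result on the ledger, which
            # triggers the contract's next-stage logic.
            ledger_client.invoke('confirmTransfer', transfer['id'], receipt.id)
        except Exception:
            # The external call failed; record that too, so the contract
            # can compensate instead of assuming the transfer happened.
            ledger_client.invoke('failTransfer', transfer['id'])
```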
This all presumes that the other financial institution refuses to participate directly in the smart contract and Hyperledger blockchain; direct participation is always going to be more efficient, reliable and secure.
It sounds like you want something like Hyperledger Quilt, which interoperates between different blockchain technologies.
I wish to implement a contract that is subject to market data which each user has access to in their own LAN, but which they are not licensed to share over the internet. I understand that chaincode is supposed to be deterministic. Does this mean that it is not designed to tolerate referencing out-of-band data (data not available in the log or state) - so it would be hazardous in this protocol to reference this market data from chaincode?
Hyperledger Fabric (version 1.0) gives you the chance to create your own distributed networks via channels. When you create a channel, you decide who its participants are, and you isolate them from the rest of the network. Then you deploy, instantiate and invoke your chaincode via that channel, so you don't share that chaincode and its transactions with the whole network.
When you execute a transaction, you do it using some parameters, which you define in your chaincode. You decide whether your chaincode takes parameters or not.
I have found many different questions in your question. Could you specify your issue more precisely?
I need some suggestions for an Erlang in-memory cache system. My requirements:
- The cache items are key-value based storage.
- The key is usually an ASCII string; the value can be any of Erlang's types, including number / list / tuple / etc.
- A cache item can be set by any node.
- A cache item can be read by any node.
- Cache items are shared across all nodes, even on different servers.
- Dirty reads are permitted; I don't want any lock or transaction to reduce the performance.
- Totally distributed: no centralized machine or service.
- Good performance.
- Easy installation, deployment, configuration and maintenance.
My first choice seems to be mnesia, but I have no experience with it. Does it meet my requirements? What performance can I expect?
Another option is memcached, but I am afraid its performance would be lower than mnesia's, because extra serialization/deserialization is performed: the memcached daemon lives in another OS process.
Yes, Mnesia meets your requirements. However, as you said, a tool is good when the one using it understands it in depth. We have used mnesia in a distributed authentication system and have not experienced any problems thus far. When mnesia is used as a cache, it is better off than memcached, for one reason: "Memcached cannot guarantee that what you write, you can read at any time, due to memory swap out issues and stuff" (follow here). However, this means that your distributed system is going to be built in Erlang.

Indeed, mnesia in your case beats most NoSQL cache solutions, because those systems are eventually consistent. Mnesia is consistent, as long as network availability can be ensured across the cluster. For a distributed cache system, you don't want a situation where you read different values for the same key from different nodes, so mnesia's consistency comes in handy here.

Something you should think about, though, is that it is possible to have a centralised memory cache for a distributed system. It works like this: you have a RabbitMQ server running and accessible by AMQP clients on each cluster node, and the systems interact over the AMQP interface. Because the cache is centralised, consistency is ensured by the process/system responsible for writing to and reading from the cache. The other systems just place a request for a key onto the AMQP message bus, and the system responsible for the cache receives the message and replies with the value; a sketch of this request/reply pattern follows below.
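For illustration, here is a minimal sketch of the client side of that pattern using the pika AMQP library in Python; the queue names and the key-as-message-body format are assumptions of mine (the cache service would consume 'cache_requests' and publish the replies):

```python
import uuid

import pika


class CacheClient:
    """AMQP RPC client for the centralized-cache pattern."""

    def __init__(self, host='localhost'):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host=host))
        self.channel = self.connection.channel()
        # exclusive, auto-named queue where the cache service replies
        result = self.channel.queue_declare(queue='', exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(
            queue=self.callback_queue,
            on_message_callback=self._on_response,
            auto_ack=True)
        self.response = None
        self.corr_id = None

    def _on_response(self, ch, method, props, body):
        # keep only the reply that matches our outstanding request
        if props.correlation_id == self.corr_id:
            self.response = body

    def get(self, key: str) -> bytes:
        # publish the key to the cache service's request queue and
        # block until the correlated reply arrives
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange='',
            routing_key='cache_requests',  # queue owned by the cache service
            properties=pika.BasicProperties(
                reply_to=self.callback_queue,
                correlation_id=self.corr_id),
            body=key)
        while self.response is None:
            self.connection.process_data_events(time_limit=1)
        return self.response
```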
We used this message-bus architecture with RabbitMQ for a recent system that involved integration with banking systems, an ERP system and a public online service. What we built was responsible for fusing all of these together, and we are glad we used RabbitMQ. The details are many, but what we did was come up with a message format and a system-identification mechanism. Every system must have a RabbitMQ client for writing to and reading from the message bus. You then create a read queue for each system, so that other systems write their requests into that queue, whose name inside RabbitMQ is the same as that of the system owning it. Later, you should encrypt the messages passing over the bus. In the end, you have systems bound together over large distances/across states, but with an efficient network you won't believe how fast RabbitMQ binds these systems together. Anyhow, RabbitMQ can also be clustered, and I should tell you that it is Mnesia that powers RabbitMQ (which tells you how good mnesia can be).
One more thing: you should do some reading and write many programs until you are comfortable with it.