Is Hyperledger Fabric considered a centralized blockchain? - hyperledger

Hyperledger has some classic/old-world mechanisms that raise the question: is it really decentralized?
Having a REST server to communicate with the blockchain resembles the cloud model.
Even though the ledger is distributed, someone calling a REST API may be written to the server logs along with data such as their IP address, geolocation info and more.
So, is Hyperledger Fabric considered a centralized blockchain, or maybe a decentralized one?
Thanks

If you mean decentralised as in not controlled by any one entity and anonymous, then no, it is not that.
Fabric is the backbone of blockchain applications, and certain things are plug and play. It is not meant to be anonymous; it is meant to be controlled, and only known parties have access to anything.
It's simply an ecosystem which allows anyone to build blockchain-based applications. Imagine a system where your bank holds your money and you want to pay someone. The bank needs to make sure that you are who you say you are and that no one else can authorise payments from your account. That's what the permissions mean. It's not meant to bypass the man in the middle like bitcoin or other cryptocurrencies are. Those, however, are just one implementation of a blockchain system; they are not the only way you can use such a system.
The immutability of the ledger offers certain advantages. Imagine an audit system where every action is recorded and can't be changed. If your audit records are in a SQL database, for example, anyone with access can go in and change or delete that data. What goes on the ledger stays there forever and can't be changed. This doesn't mean that your asset data cannot change. That is a fundamental thing to understand. Underlying data can be changed via a new transaction against the same asset, but the history of the asset is clearly visible and can't be modified.
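To make that history point concrete, here is a minimal sketch (not from the original answer) of a Fabric chaincode helper using the Go shim's GetHistoryForKey; the package and function names are illustrative:

```go
package chaincode

import (
	"fmt"

	"github.com/hyperledger/fabric-chaincode-go/shim"
)

// assetHistory walks the full, tamper-evident history of one asset key.
// New transactions can change the current value of the asset, but the
// earlier entries returned here can never be rewritten or deleted.
func assetHistory(stub shim.ChaincodeStubInterface, key string) ([]string, error) {
	iter, err := stub.GetHistoryForKey(key)
	if err != nil {
		return nil, err
	}
	defer iter.Close()

	var history []string
	for iter.HasNext() {
		mod, err := iter.Next() // one past modification of the asset
		if err != nil {
			return nil, err
		}
		history = append(history, fmt.Sprintf(
			"tx=%s deleted=%t value=%s", mod.TxId, mod.IsDelete, mod.Value))
	}
	return history, nil
}
```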
In this world, you build something, someone controls it, gives access to other organisations and people within those, and every action has its source identified.
It is decentralised in the sense that the ledger does not live in one place only, a copy of the ledger exists on every peer that is joined to a channel.
However, it is not meant to be anonymous, all the participants are known and their access level controlled, that's the whole point.

Where does raw geoip data come from?

This question is a general version of a more specific question asked here. However, those answers were unusable.
Question: What is the raw source for geoIP data?
Many websites will tell me where my IP is, but they all appear to be using databases from fewer than 5 companies (most are using a database from MaxMind). These companies offer limited free versions of their databases, but I'm trying to determine what they're using as their source data.
I've tried using Linux/Unix commands such as ping, traceroute, dig, whois, etc., but they don't provide predictably accurate information.
Preamble: I believe this is actually a very valid question for the SO website, as understanding how such things work is important to understanding how such datasets can be used in software. However, the answer to this question is rather complex and full of historical remarks.
First, it is worth mentioning that there is NO unified raw geoip data. Such a thing simply does not exist. Second, the data for this comes from multiple sources and is often unreliable and/or outdated.
To understand how that came to be, one needs to know how the Internet came into existence and spread around the world. A short summary is below:
IANA is a global [non-profit] organization which manages the assignment of IP blocks to regional organizations: https://www.iana.org/numbers. This happens upon request, and the regional organization requests a specific block size.
Regional organizations may assign those IP blocks either to ISPs directly or to country-level sub-organizations (which then assign them to ISPs).
ISPs assign IP addresses to local branches, etc.
From above you can easily see that:
There is no single body which is responsible for IP block assignment to this or that location
Decisions about how (and whether) to release information about which IP belongs to which location are not made uniformly; instead, each organization decides how (and whether at all) to release that information.
All of the above creates a whole lot of mess. It takes a lot of dedication and a long time to obtain, aggregate and sort this data. And this is why the most up-to-date and detailed geoip datasets are a commercial commodity.
Whoever takes on the challenge of building their own dataset should be able to obtain this information directly from the end of the chain (the ISPs), because higher-level organizations do not know to which location each IP address will be assigned. Higher-level organizations only distribute IP blocks among applicants (and keep some reserve for faster processing), and it is the lowest-level organizations who decide which location gets which IP address, and they are not obligated to release this information publicly.
UPD:
To start building your own dataset, you can begin with this list of blocks and how they are assigned.
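If you do go down that road, the regional registries publish "delegated" statistics files, which are plain text with pipe-separated fields (registry|cc|type|start|value|date|status) and make a reasonable starting point. Below is a minimal parsing sketch in Go, assuming you have downloaded one such file locally (the filename is a placeholder); note it only yields country-level granularity, which is exactly why finer data has to come from the ISPs:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// One allocation row from an RIR "delegated" statistics file.
// Format (pipe-separated): registry|cc|type|start|value|date|status
type Allocation struct {
	Registry string // e.g. "ripencc"
	Country  string // ISO country code, e.g. "NL"
	Start    string // first IP of the block, e.g. "2.0.0.0"
	Hosts    string // number of addresses in the block
}

func main() {
	f, err := os.Open("delegated-ripencc-latest") // placeholder local copy
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var allocs []Allocation
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // comment line
		}
		fields := strings.Split(line, "|")
		// Skip the version and summary lines; keep only IPv4 allocations.
		if len(fields) < 7 || fields[2] != "ipv4" {
			continue
		}
		allocs = append(allocs, Allocation{
			Registry: fields[0],
			Country:  fields[1],
			Start:    fields[3],
			Hosts:    fields[4],
		})
	}
	fmt.Printf("parsed %d IPv4 allocations\n", len(allocs))
}
```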

Impact of Fiori Apps on Server

We are following the Embedded Architecture for our S/4HANA 1610 system.
Please let me know what the impact on the server will be if we implement 200+ standard Fiori apps in our system.
Regards,
Sayed
When you say “server”, are you referring to the ABAP backend, consisting of one or more SAP application servers and usually one database server?
In this case, you might get a very first impression using transaction ST03.
Here, you get a detailed analysis of resource consumption on the SAP application server.
You also get information about database access times, as seen from the application server.
This can give you a good hint about resource consumption on the database server.
Usually, the ABAP backend is accessed from Fiori via OData calls.
Not every user interaction causes an OData call, some interactions are handled locally at the frontend.
In general, implemented apps only require some space on the hard disk, as long as nobody is using them.
So the important questions for defining the expected workload are:
How many users are working with these apps, and at what frequency (average think time)?
How many OData calls are sent from these apps to the backend, and how many dialog steps are handled by the frontend itself?
How expensive are these OData calls (see ST03)?
Every app reflects one or more typical business processes, which need to be defined.
Your specific Customizing also plays an important role, because it controls different internal functionality.
It’s also mandatory to optimize database access because, in productive use, tables can grow in size, which might slow down database access over time.
Usually, this kind of sizing is done by SAP Hardware and Technology partners.
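As an illustration of how those questions combine into a first load estimate, here is a back-of-envelope sketch; every number in it is a hypothetical placeholder to be replaced with values measured for your apps (e.g. via ST03). The point is that the count of installed apps matters far less than users x call rate x call cost:

```go
package main

import "fmt"

func main() {
	// Hypothetical placeholders; replace with measured values from ST03.
	concurrentUsers := 500.0 // users active at peak
	thinkTimeSec := 30.0     // average think time between dialog steps
	odataPerStep := 2.5      // OData calls sent to the backend per dialog step
	avgCallCPUms := 120.0    // average backend CPU per OData call

	stepsPerSec := concurrentUsers / thinkTimeSec
	callsPerSec := stepsPerSec * odataPerStep
	cpuSecPerSec := callsPerSec * avgCallCPUms / 1000.0

	fmt.Printf("dialog steps/s: %.1f\n", stepsPerSec)
	fmt.Printf("OData calls/s:  %.1f\n", callsPerSec)
	fmt.Printf("backend CPU seconds per wall-clock second: %.1f\n", cpuSecPerSec)
}
```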

Where do smart contracts reside in blockchain (Ethereum or Hyperledger)

So, let us consider a typical trade finance process flow. The exporter deploys a contract that has the conditions of the shipment, and a hash is generated once the deployment is finished.
Questions:
1) Where is the contract stored?
2) How can other participants, such as customs and the importer, access this contract?
3) Can we activate participant level access to the contract on the blockchain?
There are several aspects of Ethereum and Hyperledger which make them quite different. Let me give a somewhat simplified answer to avoid going into too much detail and writing an overly long answer.
First of all, Ethereum is primarily a public blockchain that works in a certain intended way. Similarly, the Bitcoin blockchain works in a certain intended way. Hyperledger is not like that; rather, it's an umbrella for distributed ledger technologies (in my terminology, not the same as blockchain) which all aim to provide a very flexible architecture, so that one can build all kinds of ledger-backed systems with pretty much any properties needed. One could compare this to an imaginary Bitcoin umbrella that provides technology to produce your own altcoins with pluggable parts for, e.g., consensus, blockchain storage and node composition. In short, they all aim to solve different problems, and one should not think one will fit all.
Coming back to your questions.
1) Where is the contract stored?
Ethereum has contracts (called smart contracts) on the chain, i.e. code is compiled to byte code and the resulting bytes are sent within a transaction to be persisted onto the Ethereum blockchain. This is done once when you deploy the smart contract. After this one can interact with the smart contract with other transactions.
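For illustration, here is a minimal go-ethereum sketch of exactly that: a deployment is a transaction with no recipient whose payload is the compiled bytecode. The endpoint, private key and bytecode below are placeholders, and error handling is elided for brevity:

```go
package main

import (
	"context"
	"encoding/hex"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	ctx := context.Background()

	// Placeholder node endpoint and key; supply your own.
	client, _ := ethclient.Dial("http://localhost:8545")
	key, _ := crypto.HexToECDSA("...private key hex...")
	from := crypto.PubkeyToAddress(key.PublicKey)

	// Compiled contract bytecode as produced by the Solidity compiler.
	bytecode, _ := hex.DecodeString("6080604052...")

	nonce, _ := client.PendingNonceAt(ctx, from)
	gasPrice, _ := client.SuggestGasPrice(ctx)
	chainID, _ := client.NetworkID(ctx)

	// A deployment is a transaction with no "to" address whose payload
	// is the contract bytecode; the network persists it onto the chain.
	tx := types.NewContractCreation(nonce, big.NewInt(0), 3000000, gasPrice, bytecode)
	signed, _ := types.SignTx(tx, types.NewEIP155Signer(chainID), key)

	_ = client.SendTransaction(ctx, signed)
	fmt.Println("deployment tx:", signed.Hash().Hex())
}
```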
Hyperledger, in theory, does not define this; it could be on a ledger, or it might not. Take Fabric, for instance: it deploys the code into a sandboxed Docker container, which can then be interacted with using transactions.
2) How other participants such as customs and importer can access this contract?
Short answer is that they are given access via credentials.
In both Ethereum and Hyperledger, this is open for you to decide yourself. We now assume the code has been deployed as code on the blockchain for Ethereum and as a Docker container in Fabric.
In Ethereum the code is, a bit simplified, publicly accessible/visible, which means you need to employ some kind of check to only allow those who should be able to interact with the smart contract to do so. One way is to check the sender (of the transaction) and only allow certain ones. It's similar to traditional systems, where one usually needs to authenticate/authorize to be allowed in and see/alter data.
In Hyperledger it would most likely be modelled in a similar manner; in Fabric, for example, there is also the Certificate Authority that hands out certificates that allow access to different parts of the system, e.g. transport, endorsement or transactions.
3) Can we activate participant level access to the contract on the blockchain?
Yes, so each participant in both systems has credentials and the designer of the smart contract can use this to control access.
Also, in Fabric there are channels that partition the ledger which is used for access control.
HTH.
1) The contract resides on the ledger. Whenever a transaction is invoked, the corresponding method in the contract gets executed on all the validating peers.
2) Other participants can access this contract using their pre-defined user credentials, which they can use to enroll themselves and invoke transactions on the contract.
3) Yes, we can activate participant level access to the contract by defining attributes for every user and allowing only those users who possess certain attributes to access specific parts of the contract.
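As a sketch of that attribute check in Fabric Go chaincode, using the client-identity (cid) library: the attribute name "role", the value "exporter" and the function names are all hypothetical, and this shim-style chaincode is only one way to write it:

```go
package main

import (
	"github.com/hyperledger/fabric-chaincode-go/pkg/cid"
	"github.com/hyperledger/fabric-chaincode-go/shim"
	pb "github.com/hyperledger/fabric-protos-go/peer"
)

type TradeContract struct{}

func (t *TradeContract) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

func (t *TradeContract) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	if fn == "updateShipment" {
		// Only identities whose certificate carries the (hypothetical)
		// attribute role=exporter may change the shipment terms.
		role, found, err := cid.GetAttributeValue(stub, "role")
		if err != nil || !found || role != "exporter" {
			return shim.Error("access denied: exporter role required")
		}
		if len(args) < 2 {
			return shim.Error("usage: updateShipment <id> <terms>")
		}
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	}
	return shim.Error("unknown function: " + fn)
}

func main() {
	if err := shim.Start(new(TradeContract)); err != nil {
		panic(err)
	}
}
```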

Printing from one Client to another Client via the Server

I don't know if it sounds crazy, but here's the scenario -
I need to print a document over the internet. My PC, ClientX, initiates the process by using the web browser to access ServerY on the internet, and the printer is connected to ClientZ (maybe yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server, etc.
3. I have the specific details of ClientZ: IP, port, etc.
4. It'll be a completely server-side application (with no client-side component on ClientZ) using ASP.NET & C#.
So, is it possible? If yes, please give some clues. Thanks in advance.
This is kind of too big of a question for SO, but basically what you need to do is:
upload files to the server -- trivial
do some stuff to figure out if they are allowed to print the document -- trivial to hard, depending on scope
add items to a queue for printing and associate them with a user/session -- easy
render and print the document -- trivial to hard, depending on scope
notify the user that the document has been printed
handle errors
The big unknown here is scope: if this is for a school project, you probably don't have to worry about billing or queue priority in step two. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty of step four depends directly on what formats you are going to support, as many formats require document-specific libraries or applications. There are also security considerations here if this is a commercial product, since it isn't safe to try to render all types of files.
Notifications can be easy or hard, depending on how you want to do them. You can post back to the HTML page, but depending on how long it's going to take for a job to complete, it might be nice to have an email option as well.
You also need to think about errors. What is going to happen when paper or toner runs out or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
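The stack proposed above is ASP.NET/WCF/MEF; purely to illustrate the daemon-plus-plugins shape (not the author's implementation), here is a compact sketch in Go, with a hypothetical Renderer interface standing in for the per-format plugin assemblies:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// Renderer is the plugin contract: one implementation per document
// type (or cluster of types), analogous to one MEF assembly per type.
type Renderer interface {
	CanHandle(ext string) bool
	Print(path, printer string) error
}

type pdfRenderer struct{}

func (pdfRenderer) CanHandle(ext string) bool { return ext == ".pdf" }
func (pdfRenderer) Print(path, printer string) error {
	// A real implementation would invoke a PDF library or application here.
	fmt.Printf("printing %s on %s\n", path, printer)
	return nil
}

type job struct{ path, printer string }

func main() {
	renderers := []Renderer{pdfRenderer{}} // plugins registered at startup
	queue := make(chan job, 16)            // jobs handed over by the web tier

	go func() {
		queue <- job{"invoice.pdf", "ClientZ-printer"}
		close(queue)
	}()

	// The daemon loop: pick a renderer per job and surface failures
	// so that jobs don't silently build up.
	for j := range queue {
		ext := strings.ToLower(filepath.Ext(j.path))
		handled := false
		for _, r := range renderers {
			if r.CanHandle(ext) {
				if err := r.Print(j.path, j.printer); err != nil {
					fmt.Println("print failed, notify user:", err)
				}
				handled = true
				break
			}
		}
		if !handled {
			fmt.Println("no renderer for", ext)
		}
	}
}
```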
Sorry this is so general, but you are asking a pretty general and difficult-to-answer question.

How do you build a torrent file indexer?

I am curious about the technology behind a search engine like torrentz.com. From what I could observe, it doesn't host any torrent files, but rather connects you to other servers that do.
You search for keywords, and it brings up a list of potential titles matching your search.
Then you pick one of these, and it provides you with another list of potential servers hosting the corresponding torrent file.
What I'm interested in particularly is the strategy behind gathering and indexing all that content:
How do they collect then aggregate the data?
Is it a submission base service, where each of these servers submits its content for indexing?
Is it a crawling algorithm? If so, how do you even start crawling a site like piratebay.org?
Do they have access to these other servers' databases?
My knowledge and understanding of the BitTorrent protocol are not very deep, but the documentation that I found online pointed me more toward the processes involved in building a tracker service, which isn't exactly what I'm interested in. Any insight and recommended reading material is appreciated.
To begin with, start indexing their RSS feeds and gathering data from them. The next step would be indexing the portals' pages (like Mininova, TPB, etc.), but watch out for the fact that you can get banned (IP-based) for doing so, since that would provoke a huge amount of data being requested from their servers (I don't think they would be too happy about that).
That said, I doubt that they have access to other servers' databases; rather, it's crawling + RSS.
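Here is a minimal sketch of that RSS-indexing step in Go; the feed URL is a hypothetical placeholder, and a real indexer would tokenize the titles into keywords and store keyword-to-link rows in its database:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

// Minimal RSS 2.0 shapes; just the fields an indexer needs.
type rss struct {
	Channel struct {
		Items []item `xml:"item"`
	} `xml:"channel"`
}

type item struct {
	Title string `xml:"title"`
	Link  string `xml:"link"` // page (or .torrent) URL to index
}

func main() {
	// Hypothetical feed URL; any portal's RSS feed works the same way.
	resp, err := http.Get("https://example.org/torrents/rss")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var feed rss
	if err := xml.NewDecoder(resp.Body).Decode(&feed); err != nil {
		panic(err)
	}
	for _, it := range feed.Channel.Items {
		// A real indexer would persist (keyword -> link) rows here.
		fmt.Println(it.Title, "=>", it.Link)
	}
}
```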
Another thing you can do: when somebody makes a query for an item which you don't have in your database, you make the query on the main BT portals, cache the result in your DB, and then display the results. Then, if another user makes the same query (which is a pretty common scenario), you can show them cached data + new data from RSS.