How to avoid the Fabric CA being a single point of failure? - hyperledger

If I understood correctly, every peer in a Fabric blockchain network (somehow interconnected through gossip) will only accept incoming connections from other peers if they use an HTTPS connection with a public key signed by the Fabric CA.
Is that correct?
So in my understanding, the Root-CA becomes the single point of failure, because one could modify it, and from then on modified Root-CA certificates would propagate to the nodes until eventually no node can connect to any other anymore.
Is this correct?

Let me try to answer the two questions also, perhaps a little more directly.
QUESTION1: If I understood correctly, every peer in a Fabric blockchain network (somehow interconnected through gossip) will only accept incoming connections from other peers if they use an HTTPS connection with a public key signed by the Fabric CA. Is that correct?
ANSWER1: No, this is not correct. You said "the Fabric CA", but each Fabric blockchain network has multiple trusted CAs, where each may be a Fabric CA, another CA, or a combination. There is no single trusted CA root in this model. Also, the connections from peers are over gRPC rather than HTTPS.
QUESTION2: So in my understanding, the Root-CA becomes the single point of failure, because one could modify it, and from then on modified Root-CA certificates would propagate to the nodes until eventually no node can connect to any other anymore. Is this correct?
ANSWER2: No, this is not correct.
There is no SPoF (Single Point of Failure) because:
a) a single Fabric CA can run in a cluster
b) there are multiple Fabric CA clusters (or other CAs) in a blockchain network.
c) the peers and orderers do not connect directly to a CA. They operate off of crypto material that is locally available from the file system or their copy of the ledger.
There is also no SPoT (Single Point of Trust) because:
a) there are multiple root CAs without a common root key, and
b) configuration updates which affect who trusts whom may require signatures from multiple identities from different roots of trust. For example, changing a trust policy could require a signature from an administrator of every organization in the blockchain (or, in Hyperledger terminology, in the channel), as sketched below.
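To make that concrete, here is a minimal, hedged sketch of such a multi-signature config update using the standard Fabric CLI; the channel name, orderer address and file names are placeholders, and the configtxlator editing step that produces the update envelope is omitted.

```
# Hedged sketch of a channel trust-policy update (names/addresses are placeholders).
# 1. Fetch the current channel configuration (editing it with configtxlator is omitted here).
peer channel fetch config config_block.pb -c mychannel -o orderer.example.com:7050

# 2. Each organization's admin signs the update envelope with its own root of trust
#    (run once per org, under that org's admin MSP context).
peer channel signconfigtx -f config_update_in_envelope.pb

# 3. Only once the required admin signatures are collected can the update be submitted.
peer channel update -f config_update_in_envelope.pb -c mychannel -o orderer.example.com:7050
```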

Peers will accept incoming connections from other peers and orderers. You define which members are going to take part in a channel, i.e. who is going to take part in a mini blockchain inside your network. Then, you create the artifacts for each member. There is more information about channels and the artifacts you should create here, and more about the tool you will use here.
Once you have created the channel and joined the peers to it, the connections are controlled by the MSP. When you create a channel, you define the public key for each member. Then, the MSP manages them.
As you said, the Root-CA could be modified, but that could happen in any other system with any other Root-CA. The Fabric CA server should only be switched on while the members are requesting their keys, and then it can be stopped. Also, Hyperledger recommends creating intermediate CAs.
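A minimal sketch of that "switch it on only while enrolling" idea using the fabric-ca tools; the bootstrap credentials and URL below are placeholders.

```
# Hedged sketch: run the CA only while identities enroll (credentials/URL are placeholders).
fabric-ca-server start -b admin:adminpw &

# A member enrolls to obtain its signing key and certificate...
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054

# ...after which the server can be stopped; peers keep using the locally stored crypto material.
kill %1
```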

The answers from varnit and Urko address the question in part. However, there are many facets to consider when determining whether the Fabric CA presents a SPoF. First, the Fabric CA can be made highly available, as noted in the response from varnit. However, the Fabric CA is not required for operation of the blockchain network; it can be used by the SDK or by the CLI to obtain the certificates that are used to configure the peers and orderer(s) in the network and the channels over which transactions will be transacted. It is also possible to create the certificates you need when configuring the network entirely without the Fabric CA, using the cryptogen tool; this is described in the Fabric documentation here. To configure the network you will use the configtxgen tool.
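For illustration, a hedged sketch of generating the crypto material and configuration artifacts without any CA server; the file and profile names follow the common fabric-samples conventions and are only examples.

```
# Hedged sketch (file/profile names are examples, not required values).
# Generate all keys and certificates for the configured organizations locally:
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config

# Build the ordering-service genesis block and a channel creation transaction:
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
```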
When configuring a network, the certificates representing each organization role are stored in the genesis block of the network, and when configuring a channel, in the channel's configuration block. Hence, each node, whether a peer or orderer, has access to all of the (root) certificates. The only way to change the root certificates of the various organizations would be to get a validated transaction to update the configuration of the network agreed per the endorsement policy defined for that network.

First of all, I would like to say that the question is very interesting. Secondly, I think your concerns about Hyperledger Composer are valid, but as a solution I would note that because all the Hyperledger Fabric components are container based, they can easily be scaled. In the case of Docker Swarm I would just use
`docker service scale hyperledger-ca=5`
and it will scale the CA to 5 containers across different nodes. I hope that answers your question; please let me know if there is anything left to answer.

Related

Need help setting up a dev/test Corda Network with docker

I want to set up an environment where I have several VMs, representing several partners, and where each VM hosts one or more nodes. Ideally, I would use Kubernetes to bring my environment up/down. I have understood from the docs that this has to be done as a dev network, not as my own compatibility zone or anything.
However, the steps to follow are not clear (to me). I have tried Dockerform and the Docker image provided, but this does not seem to be the way to do what I need.
My current (it changes with the hours) understanding is that:
a) I should create a network between the VMs that will be hosting nodes. To do so, I understand I should use Cordite or the bootstrapper jar. The Cordite documentation seems clearer than the Corda docs, but I haven't been able to try it yet. Should one or the other be my first step? Can anyone shed some light on how?
b) Once I have my network created I need a certifying entity (Thanks #Chris_Chabot for pointing it out!)
c) The next step should be running deployNodes so I create the config files. Here, I am not sure whether I can indicate in deployNodes at which IPs the nodes should be created, or whether I just need to create the Dockerfiles, certificate folders and so on, and distribute them across the VMs accordingly. I am not sure either about how to point to the network map service.
Personally, I guess that I will not use the Dockerfiles if I am going to use Kubernetes and that I only need to distribute the certificates and config files to all the slave VMs so they are available to the nodes when they are to be launched.
To be clear, and honest :D, this is even before including any CorDapp in the containers; I am just trying to have the environment ready. Basically, starting a process that builds the nodes, distributes the config files among the slave VMs, and runs the Docker containers with the nodes. As explained in a comment, the goal here is not testing CorDapps, it is testing how to deploy an operative, distributed dev environment.
ANY help is going to be ABSOLUTELY welcome.
Thanks!
(Developer Relations @ R3 here)
A network of Corda nodes needs three things:
- A notary node, or a pool of multiple notary nodes
- A certification manager
- A network map service
The certification manager is the root of the trust in the network, and, well, manages certificates. These need to be distributed to the nodes to declare and prove their identity.
The nodes connect to the network map service, which checks their certificate to see if they have access to the network, and if so, adds them to the list of nodes that it manages -- and distributes this list of node identities + IP addresses to all the nodes on that network.
Finally the nodes use the notaries to sign the transactions that happen on the network.
Generally we find that most people start developing on the https://testnet.corda.network/ network, and later deploy to the production corda.network.
One benefit of that is that this already comes with all these pieces (certification manager, network map, and a geographically distributed pool of notaries). The other benefit is that it guarantees that you have interoperability with other parties in the future, as everyone uses the same root certificate authority -- With your own network other 3rd parties couldn't just connect as they'd be on a different cert chain and couldn't be validated.
If however you have a strong reason to want to build your own network, you can use Cordite to provide the network map and certman services.
In that case step 1 is to go through the setup and configuration instructions on https://gitlab.com/cordite/network-map-service
Once that is fully setup and up and running, https://docs.corda.net/permissioning.html has some more information on how the certificates are setup, and the "Joining an existing Compatibility Zone" section in https://docs.corda.net/docker-image.html has instructions on how to get a Corda docker image / node to join that network by specifying which network map / certman url's to use.
Oh, and on the IP network question: the network map stores a combination of the X509 identity and the IP address for each node, which it distributes to the network -- this means that every node, as well as the notaries, certman and network map, needs to be able to reach that IP address, either by all being on the same network that you created, or by having public IP addresses.
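As a rough illustration of that last step, here is a hedged sketch of starting the Corda Docker image against your own network map / certman. The environment variable names follow the docker-image.html instructions as I recall them and may differ between image versions; every URL, name, path and tag below is a placeholder.

```
# Hedged sketch only: image tag, URLs, legal name and paths are placeholders,
# and the env var names may differ between Corda docker image versions.
docker run -d \
  -e MY_LEGAL_NAME="O=PartnerA,L=Berlin,C=DE" \
  -e MY_PUBLIC_ADDRESS="partner-a.example.com" \
  -e NETWORKMAP_URL="https://nms.example.com" \
  -e DOORMAN_URL="https://nms.example.com" \
  -e NETWORK_TRUST_PASSWORD="trustPass" \
  -v $(pwd)/config:/etc/corda \
  -v $(pwd)/certificates:/opt/corda/certificates \
  corda/corda-zulu-4.4:latest config-generator --generic
```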

Re-using peers/nodes in Hyperledger Fabric for 2 different DLT networks

I was exploring the channels feature of Hyperledger Fabric. I understand that separate channels can only exist within the same blockchain (DLT) network. I am just exploring the possibility in Docker containers, for building 2 PoCs.
If a node/peer is configured for an organisation in one DLT (multi-org) network, can it be re-used for a different organisation in another DLT (multi-org) network?
E.g. let's say a peer is configured for a supplier in one network (1st use case); can it be re-used as a peer for a logistics company in another network (2nd use case)?
Please note that both networks should be up and running at the same-time.
I haven't tried it practically but theoretically, it is possible. Why?
The criterion for a peer joining a channel in another network is whether that channel's configuration block contains its root CA certificates, along with its MSP ID.
The network itself is nothing but information about the organizations, and that information decides whether a peer can join a channel or not. If it is in the configuration block and it matches the peer, it is possible.
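A hedged sketch of what that would look like from the peer's side; channel names, orderer addresses and the MSP ID are hypothetical, and the second join only works if that channel's config block really contains this organization's root certificates.

```
# Hedged sketch (names/addresses are hypothetical).
export CORE_PEER_LOCALMSPID="SupplierMSP"

# Channel belonging to network 1:
peer channel fetch 0 supplierchannel.block -c supplierchannel -o orderer1.example.com:7050
peer channel join -b supplierchannel.block

# Channel belonging to network 2 -- only possible if this org's root CA certs
# and MSP ID are present in that channel's configuration block:
peer channel fetch 0 logisticschannel.block -c logisticschannel -o orderer2.example.com:7050
peer channel join -b logisticschannel.block
```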
To my understanding, the network you keep referring to is actually a consortium of some organizations.
First things first, a peer is always associated with a particular organization, and the crypto files and organization certificates are bootstrapped into that peer's Docker container.
However, the policies governing the network can vary across the consortiums and/or the channels in each consortium. The organization may have read/write access in one consortium and separate permissions in others.
The peer cannot be decoupled from an organization but the organization itself can be decoupled from the consortium.

Can I have some keyspaces replicated to some nodes?

I am trying to build multiple APIs for which I want to store the data in Cassandra. I am designing it as if I had multiple hosts, but the hosts I envision are of two types: trusted and non-trusted.
Because of that, there is certain data which I don't want to end up replicated on one group of hosts, while the rest of the data should be replicated everywhere.
I considered simply making a node for public data and one for protected data, but that would require the trusted hosts to run two nodes and it would also complicate the way the API interacts with the data.
I am also building it in Docker containers, and I expect frequent node creation/destruction, both trusted and non-trusted.
I want to know if it is possible to use keyspaces in order to achieve my required replication strategy.
You could have two datacenters, one holding your public data and the other the private data. You can configure keyspace replication so that data is replicated to only one (or both) DCs. See the docs on replication for NetworkTopologyStrategy.
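A minimal sketch, assuming a "trusted" and an "untrusted" datacenter; the DC and keyspace names are examples.

```
# Hedged sketch: public data replicated to both DCs, private data only to the trusted one.
cqlsh -e "CREATE KEYSPACE public_data
          WITH replication = {'class': 'NetworkTopologyStrategy', 'trusted_dc': 3, 'untrusted_dc': 3};"
cqlsh -e "CREATE KEYSPACE private_data
          WITH replication = {'class': 'NetworkTopologyStrategy', 'trusted_dc': 3};"
```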
However there are security concerns here since all the nodes need to be able to reach one another via the gossip protocol and also your client applications might need to contact both DCs for different reads and writes.
I would suggest you look into configuring security, perhaps SSL for starters and then internal authentication. Note that Kerberos is also supported, but this might be too complex for what you need, at least for now.
You may also consider taking a look at the firewall docs to see what ports are used between nodes and from clients so you know which ones to lock down.
Finally as the above poster mentions, the destruction / creation of nodes too often is not good practice. Cassandra is designed to be able to grow / shrink your cluster while running, but it can be a costly operation as it involves not only streaming data from / to the node being removed / added but also other nodes shuffling around token ranges to rebalance.
You can run nodes in docker containers, however note you need to take care not to do things like several containers all accessing the same physical resources. Cassandra is quite sensitive to io latency for example, several containers sharing the same physical disk might render performance problems.
In short: no you can't.
All nodes in a Cassandra cluster form a complete ring where your data will be distributed according to your selected partitioner.
You can have multiple keyspaces, plus authentication and authorization within Cassandra, and split your trusted and untrusted data into different keyspaces. Or you can go with two clusters to split your data.
From my experience, you also should not make creating and destroying Cassandra nodes your usual daily business. Adding and removing nodes is costly and needs to be monitored, as your cluster needs to maintain replication and so on. So it might be good to keep the Cassandra clusters separate from your API nodes.
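If you go the keyspace-plus-authorization route, here is a hedged sketch of what that could look like; the role and keyspace names are examples, and authentication/authorization must already be enabled in cassandra.yaml.

```
# Hedged sketch: a role that may only read and write the public keyspace.
cqlsh -u cassandra -p cassandra -e "CREATE ROLE api_user WITH PASSWORD = 'change-me' AND LOGIN = true;"
cqlsh -u cassandra -p cassandra -e "GRANT SELECT ON KEYSPACE public_data TO api_user;"
cqlsh -u cassandra -p cassandra -e "GRANT MODIFY ON KEYSPACE public_data TO api_user;"
```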

Is local system data use discouraged in hyperledger chaincode?

I wish to implement a contract that is subject to market data which each user has access to in their own LAN, but which they are not licensed to share over the internet. I understand that chaincode is supposed to be deterministic. Does this mean that it is not designed to tolerate referencing out-of-band data (data not available in the log or state) - so it would be hazardous in this protocol to reference this market data from chaincode?
Hyperledger Fabric (version 1.0) gives you the chance to create your own distributed networks via channels. When you create a channel, you decide who its participants are, and you isolate them from the rest of the network. Then, you deploy, instantiate and invoke your chaincode via that channel. That way, you don't share that chaincode and its transactions with the whole network.
When you execute a transaction, you do it using some parameters; you define them in your chaincode, and you decide whether your chaincode takes parameters or not.
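For example, a hedged sketch of how the client could pass the market data in as transaction arguments, so the chaincode itself stays deterministic and never fetches out-of-band data; the channel, chaincode and argument names are hypothetical.

```
# Hedged sketch: the client supplies the market data as arguments,
# and the chaincode only records what it is given (all names are hypothetical).
peer chaincode invoke -C mychannel -n marketcontract \
  -c '{"Args":["recordPrice","EURUSD","1.0842"]}'
```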
I have found many different questions in your question. Could you specify which is your issue?

Consul-Agent architecture .. the node-id issue after upgrading to 0.8.1 - conceptual issue?

I am not sure where the root of my problem actually comes from, so I will try to explain the bigger picture.
In short, the symptom: after upgrading Consul from 0.7.3 to 0.8.1, my agents (explained below) could no longer connect to the cluster leader due to duplicated node-ids (why that probably happens is also explained below).
I could neither fix it with https://www.consul.io/docs/agent/options.html#_disable_host_node_id nor fully understand why I ran into this, and that's where the bigger picture and maybe even different questions come from.
I have the following setup:
I run an application stack with about 8 containers for different services (different microservices, DB types and so on).
I use a single Consul server per stack (yes, the Consul server runs inside the software stack; it has its reasons, because I need this to be offline-deployable and every stack lives for itself).
The Consul server handles registration, service discovery and also KV/configuration.
Important/questionable: every container has a Consul agent started with "consul agent -config-dir /etc/consul.d", connecting to this one server. The configuration looks like this, including other files with the encryption token / ACL token (see the stand-in sketch after this list). Do not wonder about servicename(); it is replaced by an m4 macro during image build time.
The clients are secured by a gossip key and ACL keys
Important: All containers are on the same hardware node
The server configuration looks like this, if it matters. In addition, the ACLs look like this, and ACL-master and client token/gossip JSON files are in that configuration folder.
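A hedged stand-in sketch of what such a per-container client agent config could look like; the server address, datacenter name and tokens are placeholders, not the actual values used here.

```
# Hedged stand-in sketch of a per-container client agent config (all values are placeholders).
cat > /etc/consul.d/agent.json <<'EOF'
{
  "server": false,
  "datacenter": "dc1",
  "retry_join": ["consul-server"],
  "encrypt": "<gossip-key>",
  "acl_datacenter": "dc1",
  "acl_token": "<client-acl-token>"
}
EOF

# Start the local agent that services in this container talk to on 127.0.0.1:8500.
consul agent -config-dir /etc/consul.d
```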
Sorry for the probably-TL;DR above, but the reason behind all the explanation is this multi-agent setup (or 1 agent per container).
My reasons for that:
I use tiller to configure the containers, so a dimploy gem will usually try to connect to localhost:8500. To accomplish that without making the Consul configuration extraordinarily complicated, I use this local agent, which then forwards the request to the actual server and thus handles all the encryption-key/ACL negotiation stuff.
I use several 'consul watch' tasks on the server to trigger re-configuration; they also run against localhost:8500 without any extra configuration.
That said, the reason I run 1 agent per container is the simplicity of letting local services talk to the Consul backend without really knowing about authentication, as long as they connect through 127.0.0.1:8500 (that being the level of security).
Final Question:
Is this multi-agent setup actually designed to be used that way? The reason I ask is that, as far as I understand, the node-id duplication issue I now get when starting 0.8.1 comes from "the host" being the same, i.e. the hardware node being identical for all Consul agents, right?
Is my design wrong, or do I need to generate my own node-ids from now on and it's all just fine?
It seems this issue has been identified by HashiCorp and addressed in https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#085-june-27-2017, where -disable-host-node-id has been set to true by default. Thus the node-id is no longer generated from the host hardware but is a random UUID, which solves the issue I had running several Consul nodes on the same physical hardware.
So the way I deployed was fine.
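For agents older than 0.8.5 running on the same physical host, a hedged sketch of applying the same workaround explicitly:

```
# Hedged sketch: opt out of the host-derived node ID so each container agent
# gets a random one (flag available since Consul 0.8.1).
consul agent -config-dir /etc/consul.d -disable-host-node-id
```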
