iSCSI multiple connections using the same initiator IQN - target

Is it possible for multiple computers to connect to the same target at the same time, using the same initiator IQN?
Thank you.

It's not a good idea. From RFC 3720 (http://www.ietf.org/rfc/rfc3720.txt):
a) iSCSI names are globally unique. No two initiators or targets
can have the same name.
You may find that duplicate IQNs work on a given target. But even then, you must ensure that the iSCSI session IDs cannot overlap, or the target may get very confused. The combination of IQN, iSCSI session ID and target forms the I_T nexus that the target will use to keep track of things. Again, from RFC 3720:
c) I_T nexus - a relationship between a SCSI Initiator Port and a
SCSI Target Port, according to [SAM2]. For iSCSI, this
relationship is a session, defined as a relationship between
an iSCSI Initiator's end of the session (SCSI Initiator Port)
and the iSCSI Target's Portal Group. The I_T nexus can be
identified by the conjunction of the SCSI port names or by the
iSCSI session identifier SSID. iSCSI defines the I_T nexus
identifier to be the tuple (iSCSI Initiator Name + 'i' + ISID,
iSCSI Target Name + 't' + Portal Group Tag).
This nexus object must be unique.
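In practice, the clean fix is to give every host its own IQN. As a minimal sketch, assuming Linux hosts running open-iscsi (the IQNs and portal address below are hypothetical):

```
# Give each host its own unique initiator name.
# On host A:
echo 'InitiatorName=iqn.2004-10.com.example:host-a' > /etc/iscsi/initiatorname.iscsi
# On host B:
echo 'InitiatorName=iqn.2004-10.com.example:host-b' > /etc/iscsi/initiatorname.iscsi

# Restart the initiator so the new name takes effect, then log in as usual.
systemctl restart iscsid
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node --login
```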

Possible - yes. Good idea - no. For this to work correctly you would need a cluster-aware filesystem such as GFS2 or OCFS2; otherwise you'll end up with corrupted data.

Related

Need help setting up a dev/test Corda Network with docker

I want to set up an environment where I have several VMs, representing several partners, and where each VM hosts one or more nodes. Ideally, I would use Kubernetes to bring my environment up/down. I have understood from the docs that this has to be done as a dev network, not as my own compatibility zone or anything.
However, the steps to follow are not clear (to me). I have used Dockerform or the docker image provided, but this does not seem to be the way for what I need to do.
My current (it changes with the hours) understanding is that:
a) I should create a network between the VMs that will be hosting nodes. To do so, I understand I should use Cordite or the Bootstrap jar. Cordite documentation seems clearer than the Corda docs, but I haven't been able to try it yet. Should one or the other be my first step? Can anyone shed some light on how?
b) Once I have my network created I need a certifying entity (thanks @Chris_Chabot for pointing it out!)
c) The next step should be running deployNodes so I create the config files. Here, I am not sure whether I can indicate in deployNodes at which IPs the nodes should be created, or whether I just need to create the Dockerfiles, certificate folders and so on, and distribute them across the VMs accordingly. I am not sure either about how to point to the network map service.
Personally, I guess that I will not use the Dockerfiles if I am going to use Kubernetes, and that I only need to distribute the certificates and config files to all the slave VMs so they are available to the nodes when they are launched.
To be clear, and honest :D, this is even before including any CorDapp in the containers; I am just trying to have the environment ready. Basically: starting a process that builds the nodes, distributes the config files among the slave VMs, and runs the Docker containers with the nodes. As explained in a comment, the goal here is not testing CorDapps, it is testing how to deploy an operative distributed dev environment.
ANY help is going to be ABSOLUTELY welcome.
Thanks!
(Developer Relations @ R3 here)
A network of Corda nodes needs three things:
- A notary node, or a pool of multiple notary nodes
- A certification manager
- A network map service
The certification manager is the root of the trust in the network, and, well, manages certificates. These need to be distributed to the nodes to declare and prove their identity.
The nodes connect to the network map service, which checks their certificate to see if they have access to the network and, if so, adds them to the list of nodes that it manages -- and distributes this list of node identities + IP addresses to all the nodes on that network.
Finally, the nodes use the notaries to sign the transactions that happen on the network.
Generally we find that most people start developing on the https://testnet.corda.network/ network, and later deploy to the production corda.network.
One benefit of that is that it already comes with all these pieces (certification manager, network map, and a geographically distributed pool of notaries). The other benefit is that it guarantees interoperability with other parties in the future, as everyone uses the same root certificate authority -- with your own network, other third parties couldn't just connect, as they'd be on a different cert chain and couldn't be validated.
If however you have a strong reason to want to build your own network, you can use Cordite to provide the network map and certman services.
In that case step 1 is to go through the setup and configuration instructions on https://gitlab.com/cordite/network-map-service
Once that is fully set up and running, https://docs.corda.net/permissioning.html has some more information on how the certificates are set up, and the "Joining an existing Compatibility Zone" section in https://docs.corda.net/docker-image.html has instructions on how to get a Corda docker image / node to join that network by specifying which network map / certman URLs to use.
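For illustration, a node-container launch following that docs page looks roughly like the sketch below; the image tag, legal name, and URLs are hypothetical placeholders, so check docker-image.html for the current variable names:

```
# Sketch only: generate a node config and register against a custom
# network map / doorman (certman). All names and URLs are hypothetical.
docker run -ti \
    -e MY_LEGAL_NAME="O=PartnerA,L=Madrid,C=ES" \
    -e MY_PUBLIC_ADDRESS="partner-a.example.com" \
    -e NETWORKMAP_URL="https://map.example.com" \
    -e DOORMAN_URL="https://doorman.example.com" \
    -e NETWORK_TRUST_PASSWORD="trustpass" \
    -e MY_EMAIL_ADDRESS="admin@example.com" \
    -v /opt/corda/config:/etc/corda \
    -v /opt/corda/certificates:/opt/corda/certificates \
    corda/corda-zulu-4.0:latest config-generator --generic
```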
Oh, and on the IP network question: the network map service stores a combination of the X509 identity and the IP address for each node, which it distributes to the network -- this means that every node (including the notaries), plus the certman and network map services, needs to be reachable at its IP address by all the others, either by all being on the same network that you created, or by having public IP addresses.

Re-using peers/nodes in Hyperledger Fabric for 2 different DLT networks

I was exploring the channels feature of Hyperledger Fabric. I understand that separate channels are only for the same blockchain (DLT) network. I am just exploring the possibility in Docker containers, for building 2 PoCs.
If a node/peer is configured for an organisation in one DLT (multi-org) network, can it be re-used for a different organisation in another DLT (multi-org) network?
E.g., let's say a peer is configured for a Supplier in one network (1st use case); can it be re-used as a peer for a Logistics Company in another network (2nd use case)?
Please note that both networks should be up and running at the same time.
I haven't tried it practically, but theoretically it is possible. Why?
Whether a peer can join a channel in another network depends on whether its root CA certificates, along with its MSP ID, are in that channel's configuration block.
And the network itself is nothing but information about the organizations; that is what decides whether a peer can join a channel or not. If that information is in the configuration block, and it matches the peer, it is possible.
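As a minimal sketch of what that join would look like (the channel, orderer, and block names are hypothetical, and the peer CLI is assumed to run with the peer's own MSP context):

```
# Fetch the second network's channel genesis block from its orderer,
# then join the existing peer to that channel. Names are hypothetical.
peer channel fetch 0 channel2.block -o orderer2.example.com:7050 -c channel2
peer channel join -b channel2.block
```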
To my understanding, the network you keep referring to is actually a consortium of some organizations.
First things first, a peer is always associated with a particular organization, and the crypto files and organization certificates are bootstrapped into the docker container of that peer.
However, the policies governing the network can vary across the consortiums and/or channels in each consortium. The organization may have read/write access in one consortium, and in others it may have separate permissions.
The peer cannot be decoupled from an organization but the organization itself can be decoupled from the consortium.
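To illustrate that decoupling, the organization's definition (MSP ID plus root CA certificates) can be exported and handed to a second consortium; a rough sketch, assuming an org called SupplierOrg is defined in configtx.yaml:

```
# Hypothetical org name. Export the organization definition as JSON:
configtxgen -printOrg SupplierOrgMSP > supplier-org.json

# supplier-org.json (MSP ID + root CA certs) can then be added to the
# second network's consortium/channel via a config update, while the
# peer itself stays bound to the SupplierOrg organization.
```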

SCSI-3 Persistent Reservation when working with MPIO

We have 2 Windows servers running Windows Server 2012 R2.
We have a shared disk and a witness disk to implement a quorum behavior in the shared-disk arbitration.
Both quorum and data disks are currently configured with Fibre Channel MPIO.
We do not provide the hardware, so our customers work with various SAN vendors.
We are using the SCSI-3 persistent reservation mechanism for the disk arbitration: we reserve the quorum witness disk from one machine and check it from the other (passive) machine.
As part of the reservation flow each machine registers its unique SCSI registration key and uses it to perform the reservation when needed.
The issue occurs when MPIO is configured, since in our current implementation (so it seems) the key is registered on the device using the I/O path which is currently used to access the storage.
Once there is a failover/switch in the I/O path, the reservation fails because the key is not registered for that path.
Is there a way on the device/code level to have a SCSI reservation key be registered on all IO paths instead of just the specific path the registration command arrived on?
Thanks.
The PR type needs to be set to "Exclusive Access - Registrants Only", and all paths on the active Windows host must be registered for PR.
https://www.veritas.com/support/en_US/article.100016085.html
and https://www.veritas.com/support/en_US/article.100018257.html may help.
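The question is about Windows, but as a hedged illustration of per-path registration, on a Linux host with sg3_utils it looks roughly like this (the key is hypothetical, and /dev/sdc and /dev/sdd are assumed to be two paths to the same LUN):

```
# Register the same key on every path to the LUN.
sg_persist --out --register --param-sark=0x1234abcd /dev/sdc
sg_persist --out --register --param-sark=0x1234abcd /dev/sdd

# Take the reservation through one path; PR type 6 is
# "Exclusive Access - Registrants Only".
sg_persist --out --reserve --param-rk=0x1234abcd --prout-type=6 /dev/sdc

# Verify: list registered keys and the active reservation.
sg_persist --in --read-keys /dev/sdc
sg_persist --in --read-reservation /dev/sdc
```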

How to avoid the Fabric CA being a single point of failure?

If I understood correctly, every peer in a Fabric blockchain network (somehow interconnected through gossip) will only accept incoming connections from other peers if they use an HTTPS connection with a public key signed by the Fabric CA.
Is that correct?
So in my understanding, the Root-CA becomes the single point of failure, because one could modify it and from then on modified Root-CA certificates would propagate to the nodes, and eventually no node could connect to any other.
Is this correct?
Let me try to answer the two questions also, perhaps a little more directly.
QUESTION1: If I understood correctly, every peer in a Fabric blockchain network (somehow interconnected through gossip) will only accept incoming connections from other peers if they use an HTTPS connection with a public key signed by the Fabric CA. Is that correct?
ANSWER1: No, this is not correct. You said "the Fabric CA", but each Fabric blockchain network has multiple trusted CAs, where each may be a Fabric CA, another CA, or a combination. There is no single trusted root CA in this model. Also, the connections from peers are over gRPC rather than HTTPS.
QUESTION2: So in my understanding, the Root-CA becomes the single point of failure because one could modify it and from then on modified Root-CA certificates will propagate to the nodes and eventually no node can connect to each other anymore. Is this correct?
ANSWER2: No, this is not correct.
There is no SPoF (Single Point of Failure) because:
a) a single Fabric CA can run in a cluster
b) there are multiple Fabric CA clusters (or other CAs) in a blockchain network.
c) the peers and orderers do not connect directly to a CA. They operate off of crypto material that is locally available from the file system or its copy of the ledger.
There is also no SPoT (Single Point of Trust) because:
a) there are multiple root CAs without a common root key, and
b) configuration updates which affect who trusts whom may require signatures from multiple identities from different roots of trust. For example, changing a trust policy could require a signature from an administrator of every organization in the blockchain (or, in Hyperledger terminology, in the channel).
Peers will accept incoming connections from other peers and orderers. You define which members are going to take part in a channel, i.e. who is going to take part in a mini blockchain inside your network. Then, you create the artifacts for each member. You have more information about the channels and the artifacts that you should create here, and more info about the tool that you will use here.
Once you have created the channel and joined the peers to it, the connections are controlled by the MSP. When you create a channel, you define the public key for each member. Then, the MSP manages them.
As you said, the Root-CA could be modified, but that could happen in any other system with any other Root-CA. The Fabric CA server should be switched on while the members are requesting their keys, and then it can be stopped. Also, Hyperledger recommends creating intermediate CAs.
The answers from varnit and Urko address the question in part. However, there are many facets to consider when determining whether the Fabric CA presents a SPoF. First, the Fabric CA can be made highly available, as noted in the response from varnit. However, the Fabric CA is not required for operation of the blockchain network; it can be used by the SDK or by the CLI to obtain certificates that are used to configure the peers and orderer(s) in the network and the channels over which transactions will be transacted. It is possible to create the certificates that you need when you configure the network without the Fabric CA entirely, using the cryptogen tool, which is documented in the Fabric manual. To configure the network you will use the configtxgen tool.
When configuring a network, the certificates representing each organization role are stored in the genesis block of the network and, when configuring a channel, in the channel's configuration block. Hence, each node, whether a peer or orderer, has access to all of the (root) certificates. The only way to change the root certificates of the various organizations would be a validated configuration-update transaction, agreed per the policy defined for that network.
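Those root certificates can be inspected directly from the channel configuration; a rough sketch, with hypothetical orderer and channel names:

```
# Fetch the current channel configuration block (names hypothetical):
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel

# Decode it to JSON and look at each org's MSP root certificates:
configtxlator proto_decode --input config_block.pb --type common.Block > config_block.json
```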
First of all, I would like to say that the question is very interesting. Secondly, I think your concerns are true about Hyperledger Composer, but as a solution I would say that, because all the Hyperledger Fabric components are container based, they can be easily scaled. In the case of Docker Swarm I would just use
`docker service scale hyperledger-ca=5`
and it will scale it to 5 containers on different nodes. I hope that answers your question; please let me know if there is anything left to answer.

LUN was mapped incorrectly

We have a blade server booting from SAN where we attempt to image. After the image was applied successfully, the server failed to boot to the OS. We escalated the issue to the storage team and found out the root cause was "LUN was mapped incorrectly"; however, not much more detail was given regarding the root cause and resolution. We do not have much knowledge of SAN. Could someone help explain the most probable cause for "LUN was mapped incorrectly" when a server fails to boot to the OS after an image is applied, and how the issue is resolved?
First off - LUN is 'logical unit' - it's essentially a disk as provided from a storage array. Topology and geometry are hidden behind the scenes, and generally shouldn't be visible to the host.
LUN mapping is the process where a LUN as created on a storage array is presented across the SAN to a designated host - or set of hosts. Part of this involves setting a LUN ID (Although many storage arrays do this automatically) and this LUN id is how it 'appears' to the host. The convention for SCSI connectivity is that a LUN is identifiable by a compound of controller, target, LUN id. (After which the host can partition the LUN, although it probably shouldn't on most SAN storage configurations).
Controller being the card in the host, target being the storage array, and LUN being that number that the storage array has configured.
Many implementations of SCSI check to see if a LUN 0 exists first and, if it doesn't, don't bother to continue scanning the SCSI bus - as searching a large number of LUNs and getting timeouts because nothing is connected can take a lot of time.
Your boot device will be 'known' to the host as a particular combination of controller, target, LUN (and partition). Incorrect mapping means that - probably - this boot LUN was on the wrong LUN ID, thus your host couldn't find it to boot from.
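For a concrete picture, on a Linux host the controller/target/LUN tuple each disk was discovered at is visible with lsscsi; a sketch with hypothetical output and host number:

```
# Show the [host:channel:target:LUN] tuple for each disk (output illustrative):
lsscsi
# [2:0:0:0]  disk  VENDOR  MODEL  ...  /dev/sda   <- boot LUN at LUN id 0
# [2:0:0:1]  disk  VENDOR  MODEL  ...  /dev/sdb

# After the storage team fixes the mapping, rescan the SCSI host:
echo "- - -" > /sys/class/scsi_host/host2/scan
```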
