SCSI-3 Persistent Reservation when working with MPIO

We have two Windows servers running Windows Server 2012 R2.
We have a shared data disk and a witness disk to implement quorum behavior in the shared-disk arbitration.
Both the quorum and data disks are currently configured with Fibre Channel MPIO.
We do not provide the hardware, so our customers work with various SAN vendors.
We are using the SCSI-3 persistent reservation mechanism for the disk arbitration: we reserve the quorum witness disk from one machine and check it from the other (passive) machine.
As part of the reservation flow, each machine registers its unique SCSI registration key and uses it to perform the reservation when needed.
The issue occurs when MPIO is configured, since in our current implementation (so it seems) the key is registered on the device only through the I/O path currently used to access the storage.
Once there is a failover/switch of the I/O path, the reservation fails because the key is not registered for that path.
Is there a way, at the device/code level, to have a SCSI reservation key registered on all I/O paths instead of just the specific path the registration command arrived on?
Thanks.

The PR type needs to be set to "Exclusive Access - Registrants Only", and all paths on the active Windows host must be registered for the persistent reservation.
https://www.veritas.com/support/en_US/article.100016085.html and https://www.veritas.com/support/en_US/article.100018257.html may help.
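On the code/device level, this amounts to issuing a REGISTER (or REGISTER AND IGNORE EXISTING KEY) for the same key through every path device before taking the reservation, because registrations are per initiator-port/target-port pairing. Here is a minimal sketch of that sequence, using the Linux sg3_utils tool sg_persist purely to show the command flow (the device paths and key are placeholder assumptions; on Windows the equivalent PERSISTENT RESERVE OUT commands would have to be sent per path via SCSI pass-through, or by a multipathing DSM that replicates registrations for you):

# Register the same key through every path device, then reserve once.
# /dev/sdb and /dev/sdc are placeholders for the path devices that lead
# to the same witness LUN; 0x123abc is a placeholder reservation key.
import subprocess

PATHS = ["/dev/sdb", "/dev/sdc"]   # all path devices for the witness LUN
KEY = "0x123abc"                   # this host's registration key

# Step 1: register the key on every I_T nexus (i.e. through every path).
for dev in PATHS:
    subprocess.check_call([
        "sg_persist", "--out", "--register-ignore",
        "--param-sark=" + KEY, dev])

# Step 2: take the reservation through any one path.
# PR OUT type 6 is "Exclusive Access - Registrants Only" in SPC-3.
subprocess.check_call([
    "sg_persist", "--out", "--reserve",
    "--param-rk=" + KEY, "--prout-type=6", PATHS[0]])

Because the type is "Registrants Only", any path whose I_T nexus is not registered will get a RESERVATION CONFLICT once the reservation is held, which matches the failover symptom described above.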

Related

Specific processes sharing memory via mmap()

My question is simple: how do I share memory among processes, allowing both reads and writes? The main thing is that only specific processes (specific PIDs, for example) should have the ability to share that memory; not all processes should be able to access it.
One option is to use standard System V IPC shared memory. After the call to shmget(), use shmctl() to set the permissions. Give read/write permission to only one group/user and start the processes that are allowed access as that specific user. The shared memory keys and IDs can be found using ipcs, and you need to trust the standard Unix user/group-based security to do the job.
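The suggestion above uses System V shmget()/shmctl(); as a hedged sketch of the same "restrict by Unix owner/group" idea, here is a POSIX shared-memory variant in Python (the segment name, size, and 0660 mode are illustrative assumptions). On Linux the segment is backed by a file under /dev/shm, so ordinary file permissions decide which processes may attach:

# Create a shared-memory segment readable/writable only by one user/group.
import os
import stat
from multiprocessing import shared_memory

SEG_NAME = "restricted_segment"   # placeholder segment name

# Producer: create the segment and restrict it to the owning user/group (0660).
shm = shared_memory.SharedMemory(name=SEG_NAME, create=True, size=4096)
os.chmod("/dev/shm/" + SEG_NAME,
         stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP)
shm.buf[:5] = b"hello"

# Consumer (must run as the same user or group): attach by name and read.
peer = shared_memory.SharedMemory(name=SEG_NAME)
print(bytes(peer.buf[:5]))
peer.close()

shm.close()
shm.unlink()   # remove the segment when finished

As with the System V approach, the protection is only as strong as the user/group separation on the box: any process running as a permitted user can attach.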
Another option is implementing a shared-memory driver, something similar to Android's ashmem. In the driver, you can validate the PID/UID of the caller trying to access the memory and allow/deny the request based on filters. You can also implement a sysfs entry to modify these filters. If the filters need to be configurable, you again need to trust Unix user/group-based security. If you are implementing a driver, you will have plenty of security options.

Reverse engineering a Docker deployment on private cloud

I am working on software that has to be deployed on a client's private cloud. The client has root access, as well as access to the hardware. I don't want the client to reverse engineer our software.
We can control two things here:
we have access to a secure port of the server, which we can use to send tokens to decrypt the code, and shut it down if necessary;
we can do manual installation (key in a password at the time of installation) or use a tamper-resistance device if we have to.
Can a Docker deployment prevent our client from reverse engineering our code? We plan to open a single port and use SSL to protect incoming and outgoing data.
If the user has root, or is able to use a custom kernel (or even kernel modules), he can do anything: dump memory, stop processes, attach a debugger, and start reverse engineering. If the user has access to the hardware, he can also obtain root or boot a custom kernel. The only way to protect software from the user is good DRM, for example with the help of a TPM (Trusted Platform Module) or ARM TrustZone. Secure Boot will not fully protect your software (on x86 it can usually be turned off). Another option is tamper-resistant hardware (http://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-tamper-resistant-hardware.htm), like what banks use to store master encryption keys for processing PIN codes (http://en.wikipedia.org/wiki/Hardware_security_module), but this hardware is very expensive.
It is known that Docker does not protect your code from the user:
https://stackoverflow.com/a/26108342/196561 -
The root user on the host machine (where the docker daemon runs) has full access to all the processes running on the host. That means the person who controls the host machine can always get access to the RAM of the application as well as the file system. That makes it impossible to hide a key for decrypting the file system or protecting RAM from debugging.
Any user capable of deploying a Docker container (a user in the docker group) has full access to the container filesystem, has root access to the container's processes, and can debug them and dump their memory.
https://www.andreas-jung.com/contents/on-docker-security-docker-group-considered-harmful
Only trusted users should be allowed to control your Docker daemon
http://docs.docker.com/articles/security/#docker-daemon-attack-surface
Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container.
So, Docker gives no additional protection to your code from the user; we can consider it just another packaging system, like rpm and deb. Rpm and deb allow you to pack your code into a single file and list its dependencies, and Docker packs your code and its dependencies into a single image.
Our solution is hosted on our client's cloud server, so they do have access to both root and the hardware. However, we have two advantages here: 1) we have access to a secure port, which we can use to send tokens to decrypt the code, and audit suspicious activities; 2) we can do manual installation (key in a token at the time of installation)
You can only protect code that you own if it is running on hardware you own (after turning off all NSA/IntelME/IPMI/UEFI backdoors, to really own the hardware). If the user runs your code on his hardware, he will have all the binaries and will be capable of dumping memory (after receiving the token from you).
Virtualization on his hardware will not give your code any additional protection.
Does "secure port" mean SSL/TLS/SSH? That only protects data while it is sent over the network; both endpoints will have the data in plain, unencrypted form.
Manual installation will not help to protect the code after you leave the user's datacenter.
I think you can buy a typical software-protection solution, like FlexLM, perhaps with a hardware token required to run the software. But any protection can be cracked: older (cheaper) protection is cracked more easily, and modern (more expensive) protection is a bit harder to crack.
You may also run some part of the software on your own servers; that part will not be cracked.
or use Tamper resistance hardware if we have to.
You can't use tamper-resistant hardware if there is no such hardware in the user's server, and it is very expensive.

Can we connect a storage server to an application server as an external hard disk?

I am new to the storage domain. Can someone please help me understand the following?
Can a storage server be connected to an application server?
1. How are storage servers different from application servers?
2. Can multiple application servers connect to storage servers over the network?
3. What kind of files are served by NAS and SAN servers?
Firstly, this question belongs on the Server Fault Stack Exchange site; still, it is a good conceptual question.
So, the answers:
Yes, storage servers can connect to application servers (app servers are in fact software frameworks, or a specific portion of a server program's implementation). Application servers communicate with storage servers to store, retrieve, and process data.
Apart from large disk space, what else is different about storage servers, you may ask? In many cases they come with a host of specialized services. These can include storage-management software, extra hardware for higher resilience, a range of RAID (redundant array of independent disks) configurations, and extra network connections to enable more users or desktops to be connected to them.
An application server, on the other hand, is a software program that handles all application operations between users and an organization's backend business applications or databases. An application server is typically used for complex, transaction-based applications. To support high-end needs, an application server has to have built-in redundancy, monitoring for high availability, high-performance distributed application services, and support for complex database access. For mobile computing, a mobile app server is mobile middleware that makes back-end systems accessible to mobile applications to support mobile application development. Frankly speaking, application servers lie in the territory between database servers and the end user, and they often connect the two.
Multiple application servers CAN, and in reality DO, connect to storage servers over the network or even directly. But for concurrent access to data there must be guaranteed reliability of data between transactions, something like the ACID properties.
Coming to the third one: NAS, it turns out, is NOT really storage networking. Actual network-attached storage would be storage attached to a storage-area network (SAN). NAS, on the other hand, is just a specialized server attached to a local-area network. All it does is make its files available to users and applications connected to that NAS box, much the same as a storage server. To further conceptualize the difference between a NAS and a SAN: a NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server), whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks), and available to be formatted with a file system and mounted.

Integrating remote data into Zenoss

I need to monitor several Linux servers placed in a different location from my farm.
I have VPN connection to this remote location.
Internally I use Zenoss 4 to monitor the systems, and I would like to use Zenoss to monitor the remote systems too. Due to contract policy, I cannot use the VPN connection for Zenoss data (e.g. SNMP or SSH).
What I created is a bunch of scripts that fetch the desired data from the remote systems to an internal server. The returned data is one CSV per location, containing data from all appliances placed in that location.
For example:
$ cat LOCATION_1/current/current.csv
APPLIANCE1,out_of_memory,no,=,no,3,-
APPLIANCE1,postgre_idle,no,=,no,3,-
APPLIANCE2,out_of_memory,no,=,no,3,-
APPLIANCE2,postgre_idle,no,=,no,3,-
The format of the CSV is:
HOSTNAME,CHECK_NAME,RESULT_VALUE,COMPARE,DESIRED_VALUE,INFO
How can I integrate this data into Zenoss, as if the machines were placed in the internal farm?
If necessary, I could change the format of the fetched data.
Thank you very much
One possibility is for your internal server that communicates with remote systems (let's call it INTERNAL1) to re-issue the events as SNMP traps (or write them to the rsyslog file) and then process them in Zenoss.
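A small feeder script on INTERNAL1 could turn each CSV row into an event to be processed this way. This is only a sketch, assuming the field layout shown in the question and that the zensendevent utility is available on the Zenoss master (the severity mapping and flags are assumptions to verify against your Zenoss 4 install):

# Re-issue each CSV row from a location as a Zenoss event.
# Field positions follow the format given in the question.
import csv
import subprocess

CSV_FILE = "LOCATION_1/current/current.csv"   # placeholder path

with open(CSV_FILE, newline="") as fh:
    for row in csv.reader(fh):
        hostname, check_name, result = row[0], row[1], row[2]
        desired = row[4]
        severity = "Info" if result == desired else "Error"
        summary = "[%s] %s: got %s, expected %s" % (
            hostname, check_name, result, desired)
        # -d sets the device, -s the severity; verify flag names locally.
        subprocess.check_call(
            ["zensendevent", "-d", "INTERNAL1", "-s", severity, summary])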
For example, the message can start with the name of the server: "[APPLIANCE1] Out of Memory". In the "Event Class transform" section of your Zenoss web interface (http://my_zenoss_install.local:8080/zport/dmd/Events/editEventClassTransform), you can transform attributes of incoming messages (using Python). I frequently use this to lower the severity of an event. E.g.,
if evt.component == 'abrt' and evt.message.find('Saved core dump of pid') != -1:
    evt.severity = 2  # was originally 3, I think
For your needs, you can set evt.device to APPLIANCE1 if the message comes from INTERNAL1 and contains the [APPLIANCE1] tag as a message prefix, or anything else you want to use to uniquely identify messages/traps from the remote systems, as in the sketch below.
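A rough sketch of such a transform (to be pasted into the Event Class transform editor; the INTERNAL1 device name and the bracketed prefix are the assumptions from above, and whether you also rewrite evt.summary depends on your event class):

# Pull the real appliance name out of a "[APPLIANCE1] ..." message prefix
# on events that arrive via INTERNAL1, and re-home the event to it.
import re

if evt.device == 'INTERNAL1':
    m = re.match(r'\[(?P<appliance>[^\]]+)\]\s*', evt.message)
    if m:
        evt.device = m.group('appliance')    # e.g. APPLIANCE1
        evt.message = evt.message[m.end():]  # strip the prefix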
I don't claim this to be the best way of achieving your goal. My knowledge of Zenoss is strictly limited to what I currently need to use it for.
P.S. here is a rather old document from Zenoss about using event transforms. Unfortunately documentation in Zenoss is sparse and scattered (as you may have already learned), so searching old posts and/or asking questions on the Zenoss forum may be necessary.
Alternatively, you can simply deploy a collector in the remote location and add that host into the collector pool; then you can monitor the remote Linux servers as well.

LUN was mapped incorrectly

We have a blade server booting from SAN which we attempted to image. After the image was applied successfully, the server failed to boot into the OS. We escalated the issue to the storage team and found out that the root cause was "LUN was mapped incorrectly"; however, not much more detail was given regarding the root cause and resolution. We do not have much knowledge of SAN. Could someone help explain the most probable cause of "LUN was mapped incorrectly" when a server fails to boot into the OS after an image is applied, and how the issue is resolved?
First off, a LUN is a 'logical unit': essentially a disk as provided by a storage array. Topology and geometry are hidden behind the scenes and generally shouldn't be visible to the host.
LUN mapping is the process by which a LUN created on a storage array is presented across the SAN to a designated host, or set of hosts. Part of this involves setting a LUN ID (although many storage arrays do this automatically), and this LUN ID is how it 'appears' to the host. The convention for SCSI connectivity is that a LUN is identifiable by a compound of controller, target, and LUN ID. (After which the host can partition the LUN, although it probably shouldn't on most SAN storage configurations.)
The controller is the card in the host, the target is the storage array, and the LUN is the number that the storage array has configured.
Many SCSI implementations check whether LUN 0 exists first and, if it doesn't, don't bother to continue scanning the SCSI bus, since searching a large number of LUNs and hitting timeouts because nothing is connected can take a lot of time.
Your boot device will be 'known' to the host as a particular combination of controller, target, LUN (and partition). Incorrect mapping most probably means this boot LUN was presented with the wrong LUN ID, so your host couldn't find it to boot from.
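To make the addressing convention concrete, here is a small illustration; it assumes a Linux host purely to show how the compound address looks (the blade in the question may well be running something else). Each LUN the host has discovered appears under sysfs as a host:channel:target:lun tuple:

# List every SCSI logical unit the host currently sees, as H:C:T:L tuples.
import os

for addr in sorted(os.listdir("/sys/class/scsi_device")):
    host, channel, target, lun = addr.split(":")
    print("controller %s, channel %s, target %s, LUN %s"
          % (host, channel, target, lun))

If the array presents the boot LUN with a different LUN number than the one the HBA BIOS (or the host's boot configuration) expects, the tuple the host looks for no longer exists, and the boot fails exactly as described.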
