Set up Windows Storage Server as SAN Storage

We want to set up a Windows Storage Server 2016 box with Fibre Channel and two shelves as SAN storage.
We need this to connect our blades to the storage.
The blades all have FC HBAs installed, but Windows only provides an iSCSI target for serving LUNs, not a Fibre Channel target.
Is there a solution, or should we use other filer software instead (FreeNAS, Nexenta, OpenFiler)?

Windows doesn't do anything like that, and unfortunately StarWind, the usual first choice for a target-side storage stack on Windows, doesn't do any FC either. There are other Windows options, but they are very, very expensive! What you can do is run a FreeBSD virtual machine with a SCSI target stack inside (FreeBSD's CTL, or SCST if you go with Linux instead):
https://forums.freebsd.org/threads/46591/
https://gathering.tweakers.net/forum/list_messages/1613088
Pass your FC HBA through to the VM with PCI pass-through or SR-IOV and you'll be good!

Related

About NAS and SAN (protocols, architecture, etc.)

I am currently having trouble understanding the difference between NAS and SAN.
As far as I can tell, they are defined roughly as follows.
NAS (Network Attached Storage)
- Usually used as file storage and communicates over an Ethernet infrastructure
- As file storage, supports protocols like NFS, CIFS/SMB, and HTTP(S)
SAN (Storage Area Network)
- A network for communicating with block storage for data access
- Configured as a separate network
- Commonly based on Fibre Channel (FC) technology
- Can use iSCSI (in small and medium-sized businesses) or FCoE as a less expensive alternative to FC
So, here are my questions.
1. Are file storage and block storage the "solutions" here? I researched and found that NAS is a file storage solution and SAN is a block storage solution.
- In that case, is their underlying infrastructure (the storage device) the same, differing only in protocols, network devices, and maybe the storage OS that controls the underlying devices and the way they are used?
2. I found NAS solutions that support iSCSI. But iSCSI is the SCSI protocol carried over a TCP/IP network, and SCSI is a block-level storage communication protocol.
- Now I am confused: NAS is a file storage solution, so how can it support the iSCSI protocol?
3. Are AWS root disks and EBS volumes SAN storage?
- I read that SAN configurations can be expensive, and that iSCSI or FCoE are less expensive ways to build one.
- What technology is the AWS storage infrastructure built on?
I am new to the storage side of computer science and have these questions.
Can anyone explain them clearly?
Thank you.
It depends on what you call a "solution". The basic infrastructure is the same: some kind of storage server (storage system) with physical disks. The details depend heavily on technologies, vendors, and options, but typically a storage system provides access to its physical disks through protocols from two main groups: block-level protocols like SCSI (or, rarely, ATA) on one hand, and file-level protocols like NFS or CIFS on the other. That doesn't mean a storage system can't work in both block and file modes, and that is exactly how a NAS box can also expose iSCSI LUNs.
A storage network (SAN) can be built over FC, FCoE, converged infrastructure, pure TCP/IP for iSCSI, InfiniBand, or any other transport. Typically, when people say "SAN" they mean block storage devices and the FC protocol, but that doesn't mean a file storage system (NAS) can't be connected to a SAN, and vice versa.
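To make the block-vs-file distinction concrete, here is a minimal C sketch of what a client sees in each case. The paths are hypothetical: /mnt/nas stands for an NFS or SMB mount (file-level, NAS), and /dev/sdb for a LUN presented over FC or iSCSI (block-level, SAN):
```c
/* Client-side view of file-level (NAS) vs block-level (SAN) access.
 * Linux/POSIX; both paths are assumptions for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];

    /* File-level: the storage server understands files and paths
     * and resolves the on-disk layout for us. */
    int f = open("/mnt/nas/report.txt", O_RDONLY);
    if (f >= 0) {
        read(f, buf, sizeof buf);
        close(f);
    }

    /* Block-level: the storage array only sees numbered blocks;
     * the client's own filesystem or database gives them meaning. */
    int b = open("/dev/sdb", O_RDONLY);
    if (b >= 0) {
        pread(b, buf, sizeof buf, 4096); /* raw read at byte offset 4096 */
        close(b);
    }

    printf("file-level fd=%d, block-level fd=%d\n", f, b);
    return 0;
}
```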

Is there a traditional virtualization technology other than Docker that uses the underlying disk storage without pre-setting a limit?

I know Docker can fully use the underlying disk storage without pre-setting a limit. Is there a traditional virtualization technique I've missed, such as Xen, KVM, VirtualBox, etc., that can do the same?
Requirements:
1. open source
2. I can configure virtual IPs, like with KVM, Xen, etc.
3. it can use the underlying disk storage like Docker (no need to pre-set a limit, so it can grow to fill all disk space, for each instance and across multiple instances)
Using VirtualBox you should be able to get everything required:
It's open source.
You can configure as many virtual IPs as you want (see https://blogs.oracle.com/scoter/networking-in-virtualbox-v2 ).
You can create vdisks (pre-allocated or dynamically allocated), expand them, and also use shared folders (shared between host and guest); dynamically allocated disks only consume host space as the guest writes to them, as in the sketch below.
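To see why a dynamically allocated vdisk needs no pre-set hard limit, here is a minimal C sketch of the underlying idea, thin provisioning via a sparse file. The file name disk.img is hypothetical, and real VDI/qcow2 images use their own growing container formats rather than plain sparse files:
```c
/* Thin provisioning in miniature: the file advertises a large size
 * but only consumes the blocks that were actually written.
 * Linux/POSIX; build on a 64-bit system or with -D_FILE_OFFSET_BITS=64. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("disk.img", O_CREAT | O_RDWR, 0644);
    if (fd < 0) return 1;

    /* Advertise 10 GiB without allocating any of it. */
    ftruncate(fd, 10LL * 1024 * 1024 * 1024);

    /* Write 4 KiB somewhere in the middle; only this gets allocated. */
    char block[4096] = {0};
    pwrite(fd, block, sizeof block, 1LL * 1024 * 1024 * 1024);

    struct stat st;
    fstat(fd, &st);
    printf("apparent size: %lld bytes, actually allocated: %lld bytes\n",
           (long long)st.st_size, (long long)st.st_blocks * 512);
    close(fd);
    return 0;
}
```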
Simon

Is it possible to use EFI to create a fully cross-platform disk driver?

I need to create a driver which will behave similarly to software RAID, i.e. the driver will need to communicate with multiple physical disks (or maybe even network resources) and shall look like a single disk to the OS.
So the two main questions are:
1) Are EFI drivers recognized and supported by Windows, Mac OS X, and Linux? I.e. can these systems use EFI disk drivers?
2) Is it possible in theory to write such a driver for EFI? My primary concern is the possibility of accessing other EFI disk drivers from my own virtual disk driver.
I only have time for a quick-ish reply, forgive me for brevity!
Are EFI drivers recognized and supported by Windows, Mac OS X, and Linux? I.e. can these systems use EFI disk drivers?
To my knowledge, only the bootloaders for these OSs use the UEFI driver stack, for loading the native OS kernel and drivers. Once ExitBootServices() is called by the bootloader, most of the drivers are unloaded, and (according to the spec) no calls to handle-based drivers may happen after this, meaning no disk drivers. Like a traditional bootloader, a UEFI bootloader only uses the basic drivers long enough to load the OS's native drivers. You can also use these drivers in the preboot environment if you'd like (although it sounds like you don't!).
TL;DR No, these systems can't use UEFI drivers other than for loading the OS.
Is it possible in theory to write such a driver for EFI?
You should definitely be able to layer your UEFI driver on top of the existing stack. It might be a little tricky if you haven't worked with UEFI before, but conceptually the system is very modular. There appear to be a number of resources on the Internet to help you out, and there is always Beyond BIOS: Developing with the Unified Extensible Firmware Interface by Vincent Zimmer.
As far as testing goes, you can use one of the simulators provided in Intel's EDK II (if you're on Windows, you should probably use the Nt32 project; it works well with Visual Studio).
TL;DR Yes, you can write this driver, but it will only work for bootloaders and applications in the pre-boot environment.
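To give a flavour of what layering on the existing stack looks like, here is a heavily trimmed EDK II-style sketch that publishes a virtual EFI_BLOCK_IO_PROTOCOL striping reads across two underlying disks, RAID-0 style. This is a sketch only, not a buildable driver: the DriverBinding plumbing, write/flush paths, and error handling are omitted, and locating the lower devices is left as a comment:
```c
/* Sketch: a virtual Block I/O instance layered on two existing
 * BlockIo instances. EDK II types; names are from the UEFI spec. */
#include <Uefi.h>
#include <Protocol/BlockIo.h>
#include <Library/UefiBootServicesTableLib.h>

STATIC EFI_BLOCK_IO_PROTOCOL *mLower[2];  /* the two real disks */
STATIC EFI_BLOCK_IO_PROTOCOL  mVirtualBlockIo;
STATIC EFI_BLOCK_IO_MEDIA     mMedia;

/* Stripe reads across the two lower devices, block by block. */
STATIC EFI_STATUS EFIAPI
VirtualReadBlocks (EFI_BLOCK_IO_PROTOCOL *This, UINT32 MediaId,
                   EFI_LBA Lba, UINTN BufferSize, VOID *Buffer)
{
  UINTN  BlockSize = mMedia.BlockSize;
  UINT8  *Out      = Buffer;
  UINTN  Index;

  for (Index = 0; Index < BufferSize / BlockSize; Index++, Lba++) {
    EFI_BLOCK_IO_PROTOCOL *Disk = mLower[Lba & 1];   /* even/odd stripe */
    EFI_STATUS Status = Disk->ReadBlocks (Disk, Disk->Media->MediaId,
                                          Lba / 2, BlockSize,
                                          Out + Index * BlockSize);
    if (EFI_ERROR (Status)) {
      return Status;
    }
  }
  return EFI_SUCCESS;
}

/* Entry point: fill in the protocol and hang it on a new handle;
 * a real driver would use the DriverBinding protocol instead. */
EFI_STATUS EFIAPI
VirtualDiskEntry (EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
  EFI_HANDLE Handle = NULL;

  /* ... locate the two lower EFI_BLOCK_IO_PROTOCOLs into mLower ... */

  mMedia.BlockSize           = 512;
  mVirtualBlockIo.Media      = &mMedia;
  mVirtualBlockIo.ReadBlocks = VirtualReadBlocks;

  return gBS->InstallProtocolInterface (&Handle,
                                        &gEfiBlockIoProtocolGuid,
                                        EFI_NATIVE_INTERFACE,
                                        &mVirtualBlockIo);
}
```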
1) Are EFI drivers recognized and supported by Windows, Mac OS X, and Linux? I.e. can these systems use EFI disk drivers?
No. Once you run ExitBootServices(), it all goes bye-bye. You can have a RuntimeDXE driver, but those are incredibly limited: they cannot allocate memory (the virtual memory map is controlled by the OS) and they no longer have access to any EFI boot-services APIs. They can be used to transfer information from the firmware to the OS, but a better choice would be a private EFI table.
2) Is it possible in theory to write such a driver for EFI? My primary concern is the possibility of accessing other EFI disk drivers from my own virtual disk driver.
The bootloader is the only thing able to use the EFI drivers. If you want to go to the OS level, you need to write your own OS driver that gets the information from the EFI driver through the EFI system tables. Full-disk-encryption implementations are examples of this.
In theory you would need to write an EFI block driver, plus a Windows IO filter or bus/volume driver, an OS X IOStorageFamily kext, and a Linux block device driver, all of them transferring the information from firmware to OS using EFI tables, variables, or RuntimeDXE.
RuntimeDXE implementations are incredibly difficult due to the conversion from the flat addressing available in EFI to the OS-controlled virtual memory map.
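To illustrate that last point, here is a minimal EDK II-style sketch of the pointer-conversion dance a RuntimeDXE driver must perform when the OS calls SetVirtualAddressMap(). The names are from the UEFI spec and EDK II; everything else is omitted:
```c
/* Sketch: a runtime driver keeping one buffer alive across the
 * switch from flat physical addressing to the OS's virtual map. */
#include <Uefi.h>
#include <Guid/EventGroup.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/UefiRuntimeServicesTableLib.h>

STATIC VOID      *mSharedBuffer;  /* physical pointer until remapped */
STATIC EFI_EVENT  mVirtualAddressChangeEvent;

/* Fired by SetVirtualAddressMap(): re-base every stored pointer.
 * After this runs, the old physical address is unusable. */
STATIC VOID EFIAPI
OnVirtualAddressChange (EFI_EVENT Event, VOID *Context)
{
  gRT->ConvertPointer (0, (VOID **)&mSharedBuffer);
}

EFI_STATUS EFIAPI
RuntimeDriverEntry (EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
  /* Memory must come from the runtime pool before ExitBootServices();
   * nothing more can be allocated once the OS owns the memory map. */
  gBS->AllocatePool (EfiRuntimeServicesData, 4096, &mSharedBuffer);

  return gBS->CreateEventEx (EVT_NOTIFY_SIGNAL, TPL_NOTIFY,
                             OnVirtualAddressChange, NULL,
                             &gEfiEventVirtualAddressChangeGuid,
                             &mVirtualAddressChangeEvent);
}
```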

Advise a clustered file system for a Fibre Channel storage array

I am deploying OpenStack for a private cloud and am facing the problem of choosing a file system for storage, so that live migration between physical servers works.
Configuration:
An HP P2000 FC disk array and four compute nodes, which are connected through FC HBAs to one shared LUN on the storage.
Please advise a clustered file system that does not use iSCSI, FCoE, etc., only FC; something like VMFS from VMware.
Thx!
I can tell you about OpenStack Object Storage, aka Swift: you can use Fibre Channel across zones/regions/geo-clusters for data transfer.
Hope it helps.
There are no good clustered file system alternatives to VMware's VMFS on Linux. You may look at Oracle OCFS2, Red Hat GFS2, SGI CXFS, and Symantec VxFS. All of them are dated; the newer generation of file systems has moved to a distributed architecture over local drives (as opposed to a shared SAN) for scalability.
I think you need an object/distributed file system: Ceph, Lustre, NetApp object storage, etc.
But you can also create a SAN-level cluster. I don't know whether the MSA supports it, but 3PAR has HP Peer Persistence.

What is the most suitable virtual machine software for sharing hardware ports (COM, LPT, etc.) at register level?

I'm using Delphi to develop real-time control software, and over the last couple of years I have done some work running older Windows installations under Microsoft's Virtual PC. It works fine for 'pure software' development (i.e. no, or limited, access to the outside world). Such tools seem able to work with network connections, but I have to maintain software which performs I/O via the parallel port (through a device driver), and we also use USB I/O. In the past I've liked Microsoft's virtual tools because it takes time to install a new operating system and then (in my case) install Delphi and a load of libraries and components to provide development support. In those circumstances I've not been too bothered by my lack of access to the low-level I/O ports.
I want to up my game and I'm happy to pay for a good virtualisation tool IF I can have access from it to the outside world, i.e. I want to be able to configure it to allow access to my machine's parallel and COM ports in the same way as if it were running natively. This access has to expose the parallel port in register terms, i.e. to 'see' the port at address $0378, for example, and to support I/O operations on those registers (via the appropriate kernel access), as my Windows 7 64-bit installation is able to do.
I see that there are a number of virtualisation solutions out there now, but it's quite hard to ascertain the capability of each at such a low level. Does anyone have any experience or knowledge in this area?
The VMware products would be best suited for this. You can add virtual serial and parallel ports and forward them to a physical port on the host, or even to a file or a named pipe.
You can also connect to the guest any USB device that is attached to the host machine.
This works with VMware Workstation, and might work with the free VMware Player too.
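For reference, "register level" in this question means direct port I/O of the kind sketched below; a guest can only do this if the hypervisor traps and forwards the port range to real hardware. A minimal Linux/x86 C sketch (the conventional LPT1 base 0x378 is assumed; needs root; compile with cc -O2):
```c
/* Register-level parallel port access: the kind of I/O the guest
 * software performs and the hypervisor must forward to hardware. */
#include <stdio.h>
#include <sys/io.h>

#define LPT1_BASE    0x378            /* data register    */
#define LPT1_STATUS  (LPT1_BASE + 1)  /* status register  */
#define LPT1_CONTROL (LPT1_BASE + 2)  /* control register */

int main(void)
{
    /* Ask the kernel for direct access to the three port registers. */
    if (ioperm(LPT1_BASE, 3, 1) != 0) {
        perror("ioperm (are you root?)");
        return 1;
    }

    outb(0xAA, LPT1_BASE);                   /* drive the 8 data lines */
    unsigned char status = inb(LPT1_STATUS); /* read BUSY/ACK/etc.     */
    printf("status register: 0x%02x\n", status);

    ioperm(LPT1_BASE, 3, 0);                 /* drop port access again */
    return 0;
}
```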
