Are Read/Write operations allowed in RAID when a drive fails? - hard-drive

RAID 0/1/5 in question. I am trying to determine whether or not it is possible to continue reading or writing if ONE hard drive in the array fails.
For RAID 0 I assumed the entire array fails if one drive goes down (but is it still possible to read from the surviving drive?).
For RAID 1 I assumed reading and writing continues as normal since
there is a mirror copy.
For RAID 5 I assumed that reads would continue as normal (though missing data would have to be reconstructed from parity) and writes should cease until a new drive is inserted.
I am pretty sure that my assumptions for RAID 1 are correct, but I am uncertain about the behavior of RAID 0 and 5. Ideally you wouldn't touch a degraded RAID 5 array until you could rebuild onto a new drive, but is it possible? Any resources on the subject would be awesome.

RAID 0: If one disk fails, the entire array fails (i.e. you can no longer read from the surviving disks, since every stripe is now incomplete).
RAID 1: If one disk fails, the array can continue to be read and written. After the failed disk has been replaced, the array can be rebuilt online, though performance is impacted during the rebuild.
RAID 5: If one disk fails, the array can continue to be read and written in degraded mode (reads reconstruct the missing data from parity). After the failed disk has been replaced, the array can be rebuilt online, though performance is impacted during the rebuild.
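The degraded-mode read for RAID 5 comes down to XOR parity: the missing block in a stripe is the XOR of the surviving data blocks plus the parity block. A minimal sketch with toy byte-string "blocks" (names are illustrative, not from any real RAID implementation):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One stripe spread across three data disks, plus its parity block.
data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(data)

# Disk 1 fails: reconstruct its block from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

This is why degraded reads still work but cost extra I/O: every read of a block on the failed disk touches all the surviving disks in that stripe.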

Related

Flash memory raw data changes depending on the reading tool. Why?

I've been playing around with the raw data inside an 8GB Memory Stick, reading and writing directly into specific sectors, but for some reason changes don't remain consistent.
I've used Active@ Disk Editor to write a string at a specific sector and it seems consistent when I read it through Active@ (it survives unmounting, rebooting...), but if I try to read it through the terminal using dd and hexdump the outcome is different.
Some time ago I was researching ways to fully and effectively erase a disk, and I read somewhere that solid-state devices such as flash drives and SSDs have more physical memory than is advertised, and that their internals keep remapping parts of the memory to increase lifespan, or something like that.
I don't know if it is because of that or if it's even correct. Could you tell me if I'm wrong or where to find good documentation about the subject?
Okay I just figured it out.
Apparently when you open a disk inside a hex editor there are two ways you can go: you can open it as a physical disk (the whole device) or as a logical disk, a.k.a. a volume or partition.
Active@ Disk Editor was opening it as a physical disk, while dd and hexdump were dumping it as a logical disk. In other words, they were dumping the contents of the only partition inside the physical disk. This means there was an offset between the real physical sectors where I was writing data using Active@ and the ones I was reading (quite a big offset: 2048 sectors of 512 bytes each).
So changes were being made, I was just looking at the wrong positions. Hope this saves someone a few minutes.
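For anyone who wants to check the arithmetic: with the partition starting at sector 2048 and 512-byte sectors, the shift works out to exactly 1 MiB. A tiny sketch (constants taken from the numbers above; the helper name is mine):

```python
SECTOR_SIZE = 512        # bytes per sector
PARTITION_START = 2048   # first sector of the partition on the whole disk

def logical_to_physical(logical_sector):
    """Map a sector offset inside the partition to a whole-disk sector."""
    return PARTITION_START + logical_sector

# The partition's sector 0 is the disk's sector 2048,
# i.e. everything is shifted by 2048 * 512 = 1 MiB.
assert logical_to_physical(0) == 2048
assert PARTITION_START * SECTOR_SIZE == 1_048_576
```

So a string written at "sector N" of the physical disk shows up at sector N - 2048 of the partition device, which is where dd and hexdump were looking.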

How can a volume exist on more than one physical disk?

Here they say that calling DeviceIoControl with the IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS control code "retrieves the physical location of a specified volume on one or more disks." But from my 25 years of using computers I know that a physical disk can have one or more volumes, not the other way around. I can't even imagine how a volume can exist on multiple physical disks. So, the question is: in which cases does a volume exist on multiple disks?
Spanned Volume
A spanned volume combines areas of unallocated space from multiple disks into one logical volume, allowing you to more efficiently use all of the space and all the drive letters on a multiple-disk system.
Though it's only supported on dynamic disks:
The following operations can be performed only on dynamic disks:
...
Extend a simple or spanned volume.
https://learn.microsoft.com/en-us/windows/win32/fileio/basic-and-dynamic-disks explains what you must at least have heard of in your 25 years: (software) RAIDs. A RAID 0 is basically your solution to the problem "no disk exists that is large enough for my needs".
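To make the "one volume, many disks" idea concrete, here is a hypothetical sketch of how a volume manager might map a volume-relative offset onto a (disk, offset) pair using an extent table, which is essentially what IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS reports. The extent numbers and names are made up for illustration:

```python
# Each extent: (disk id, starting offset on that disk, length).
# A spanned volume simply concatenates extents from different disks.
extents = [
    ("disk0", 0, 100),
    ("disk1", 50, 200),
]

def resolve(vol_offset):
    """Translate a volume-relative offset to (disk, disk-relative offset)."""
    base = 0
    for disk, start, length in extents:
        if vol_offset < base + length:
            return disk, start + (vol_offset - base)
        base += length
    raise ValueError("offset beyond end of volume")

assert resolve(10) == ("disk0", 10)
assert resolve(150) == ("disk1", 100)   # past disk0's 100 units, into disk1
```

A striped (RAID 0) volume interleaves the mapping instead of concatenating it, but the principle is the same: one logical address space backed by extents on several physical disks.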

Opening millions of numpy.memmaps in python

I have a database composed of millions of training examples. Each is saved as its own numpy.memmap. (Yes, yes, I know, but they're of irregular sizes. I probably will modify my design to put like-size examples together in one memmap and hide that fact from the user.)
Trying to open this database causes me to run into the system NOFILE limits, but I've solved that part.
Now I'm running into OSError: [Errno 12] Cannot allocate memory after about 64865 memmaps are created, and executing most other code after that point results in a MemoryError. This is strange because the process takes only 1.1 GiB of memory before it fails, and this server has nearly 100 GiB of memory.
I've gone and saved a million dummy files in a directory and opened them all with python's standard open function, and it works fine. Takes about 5 GiB of memory between file handles and contents, but it works.
What could be limiting me to only opening about 2^16 memory maps?
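The thread ends here, but a likely suspect on Linux is the vm.max_map_count sysctl, whose default of 65530 is suspiciously close to the ~64865 maps reported above (each numpy.memmap consumes at least one entry in the process's memory-map table). A quick way to check the limit from Python (assumes Linux; the helper name is mine):

```python
def max_map_count(path="/proc/sys/vm/max_map_count"):
    """Return the per-process mmap limit on Linux, or None if unavailable."""
    try:
        with open(path) as f:
            return int(f.read())
    except (FileNotFoundError, ValueError):
        return None  # not Linux, or /proc not mounted

limit = max_map_count()
print(limit)  # typically 65530 on a stock Linux kernel
```

If that is indeed the culprit, raising it (e.g. `sysctl -w vm.max_map_count=262144` as root) should move the failure point, which is a cheap way to confirm the diagnosis before redesigning the storage layout.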

Use log4j 2 for writing to data files or database table

I used log4j (v. 1) in the past and was glad to know that a major refactoring was done to the project, resulting in log4j 2, which solves the issues that plagued version 1.
I was wondering if I could use log4j 2 to write to data files, not only log files.
The application I will soon be developing will need to receive many events from different sources and write them very fast, either to a data file or to a database (I haven't decided which yet).
The thread that receives the events must not be blocked by I/O while attempting to write events, so log4j2's Asynchronous Loggers, based on the LMAX Disruptor library, will definitely fit this scenario.
Moreover, my application must be able to recover from 'not enough space on disk' or 'unable to reach database' conditions when writing to a data file or to a database table, respectively. In other words, when the application runs out of disk space or the database is temporarily unavailable, it needs to store events in memory, wait for storage to become available again, and then write all waiting events to disk or to the database.
Do you think I can do this with log4j?
Many thanks for your help.
Regards,
Nuno Guerreiro
Yes.
I'm aware of at least one production implementation in a similar scenario, wherein gathered events are written to disk at high throughput.
Write to a volume other than your system volume to minimize the chances of system crashes due to disk space overrun.
Upfront capacity planning can help ensure a hardware configuration with adequate resources to handle the projected average load and bursts for a reasonable period of time.
Do not let the system run out of disk space :). Keep track of disk usage, and proactively drop older data in extreme circumstances.
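The buffer-and-retry behavior the question asks about can be sketched independently of log4j. The following Python sketch (class and names are mine, purely illustrative, not log4j's API) queues events in memory while the sink raises errors and flushes them once writes succeed again:

```python
import collections

class BufferedWriter:
    """Keep events in memory while the sink is down; flush once it recovers."""

    def __init__(self, sink):
        self.sink = sink                    # callable that may raise OSError
        self.pending = collections.deque()  # events awaiting a successful write

    def write(self, event):
        self.pending.append(event)
        self.flush()

    def flush(self):
        while self.pending:
            try:
                self.sink(self.pending[0])  # attempt oldest event first
            except OSError:
                return                      # sink unavailable; keep buffering
            self.pending.popleft()          # only drop after a confirmed write
```

In log4j 2 itself, the async logger's ring buffer provides the in-memory stage; what happens when that buffer fills up is governed by its queue-full policy, so the buffering window is bounded by memory, which matches the "keep track of disk usage" advice above.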

Hard disk working principle

I have 10 bytes of data to write to a file. If the electricity cuts out after my program has written 9 bytes and 7 bits to the hard disk, how many bytes can I read from this file once the power comes back? 9 bytes or 10 bytes?
You can't say anything: there are too many layers of abstraction here. Your program often buffers, the OS buffers, the chipset buffers, the drive itself buffers, and at some point the data will be written.
When you ask for a hard sync on the data through something like fsync, all you're getting is a confirmation that at least your data was written; there is no guarantee that nothing else was written along with it.
It takes non-zero amounts of time for this data to stream through all those layers and physically end up on your disk, SSD or otherwise. If power cuts at some point in this process and you haven't received a write confirmation the safe thing to assume is you do not know how much was written. You'll have to inspect whatever files you were writing to before and see what data is present.
When your system reboots it will probably have to recover from the filesystem journal anyway, and any uncommitted changes will be rolled back. In your example, the number of bytes actually readable afterwards could well be zero.
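The fsync mentioned above is the point where you can start reasoning about durability at all: flushing the userspace buffer hands the bytes to the kernel, and fsync asks the kernel to push them to the device before returning. A minimal Python sketch of the pattern (the filename is arbitrary):

```python
import os

# Write 10 bytes and insist they reach the device before we consider
# the write durable.
with open("out.bin", "wb") as f:
    f.write(b"0123456789")
    f.flush()               # drain Python's userspace buffer to the kernel
    os.fsync(f.fileno())    # block until the kernel reports the data is on disk
```

Even this only covers the file's data; making the *existence* of a new file durable additionally requires an fsync on its containing directory, and drives with volatile write caches can still reorder things beneath the OS.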
