My program needs to read chunks from a huge binary file with random access. I have a list of offsets and lengths that may contain several thousand entries. The user selects an entry, and the program seeks to the offset and reads that many bytes.
The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:
FileStream.Position := Offset;            // seek to the start of the selected chunk
MemoryStream.CopyFrom(FileStream, Size);  // copy Size bytes into the memory stream
This works, but unfortunately it becomes increasingly slow as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size.
The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue.
I am using Delphi 2007 on a Windows XP box.
What can I do to speed up this file access?
edit:
The file access is slow for large files, regardless of which part of the file is being read.
The program usually does not read the file sequentially. The order of the chunks is user driven and cannot be predicted.
It is always slower to read a chunk from a large file than to read an equally large chunk from a small file.
I am talking about the performance for reading a chunk from the file, not about the overall time it takes to process a whole file. The latter would obviously take longer for larger files, but that's not the issue here.
I need to apologize to everybody: after I implemented file access using a memory-mapped file as suggested, it turned out that it did not make much of a difference. But after I added some more timing code, it also turned out that it is not the file access that slows down the program. The file access actually takes nearly constant time regardless of the file size. Some part of the user interface (which I have yet to identify) seems to have a performance problem with large amounts of data, and somehow I failed to see the difference when I first timed the processes.
I am sorry for being sloppy in identifying the bottleneck.
If you open the help topic for the CreateFile() WinAPI function, you will find interesting flags there such as FILE_FLAG_NO_BUFFERING and FILE_FLAG_RANDOM_ACCESS. You can experiment with them to gain some performance.
Next, copying the file data, even just 100 KB, is an extra step which slows down operations. It is a good idea to use the CreateFileMapping and MapViewOfFile functions to get a ready-to-use pointer to the data. This way you avoid copying and also possibly get certain performance benefits (but you need to measure the speed carefully).
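The question is Delphi 2007, where you would call CreateFile, CreateFileMapping and MapViewOfFile from the Windows unit directly. Purely to illustrate the shape of the approach, here is a minimal sketch in C#, whose MemoryMappedFile type wraps those same Win32 calls; the file name, offset and length are placeholders, not anything from the original program:

using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class MappedChunkReader
{
    // Reads one chunk (offset/length taken from the entry list) without the
    // intermediate FileStream -> MemoryStream copy.
    static byte[] ReadChunk(string path, long offset, int length)
    {
        // Open the existing file read-only; capacity 0 means "use the file's own size".
        using var mmf = MemoryMappedFile.CreateFromFile(
            path, FileMode.Open, null, 0, MemoryMappedFileAccess.Read);

        // Map only the window we need; the runtime aligns the view internally.
        using var view = mmf.CreateViewAccessor(offset, length, MemoryMappedFileAccess.Read);

        var buffer = new byte[length];
        view.ReadArray(0, buffer, 0, length);   // copy the mapped bytes out
        return buffer;
    }

    static void Main()
    {
        byte[] chunk = ReadChunk("huge.bin", 1_234_567, 100 * 1024);
        Console.WriteLine($"Read {chunk.Length} bytes");
    }
}

In a real program you would open the mapping once and keep it for the lifetime of the file, mapping a fresh view per chunk, and as always measure whether it actually beats the plain seek-and-read.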
Maybe you can take this approach:
Sort the entries by file position and then do the following:
Take the entries that only need the first X MB of the file (up to a certain file position)
Read X MB from the file into a buffer (TMemoryStream)
Now read the entries from the buffer (maybe multithreaded)
Repeat this for all the entries.
In short: cache a part of the file and read all entries that fit into it (multithreaded), then cache the next part, etc. A sketch of this is shown below.
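A minimal sketch of that windowed approach, sketched in C# rather than Delphi purely for illustration, assuming a hypothetical Entry record with Offset and Length fields and a fixed window size (all names and sizes are placeholders):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

record Entry(long Offset, int Length);

class WindowedReader
{
    const long WindowSize = 64L * 1024 * 1024;   // cache 64 MB of the file at a time (tune this)

    static void ReadAll(string path, List<Entry> entries, Action<Entry, byte[]> process)
    {
        // Sort once so every window of the file is read sequentially and only once.
        var sorted = entries.OrderBy(e => e.Offset).ToList();

        using var fs = new FileStream(path, FileMode.Open, FileAccess.Read);
        var window = new byte[WindowSize];

        foreach (var group in sorted.GroupBy(e => e.Offset / WindowSize))
        {
            long windowStart = group.Key * WindowSize;
            fs.Position = windowStart;
            fs.Read(window, 0, window.Length);          // one big sequential read

            foreach (var e in group)                    // serve the entries from memory
            {
                var chunk = new byte[e.Length];
                Array.Copy(window, e.Offset - windowStart, chunk, 0, e.Length);
                process(e, chunk);
            }
        }
    }
}

Entries that straddle a window boundary are not handled here; a real version would either read those chunks individually or overlap the windows.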
Maybe you can gain speed if you just take your original approach, but sort the entries on position.
The stock TMemoryStream in Delphi is slow due to the way it allocates memory. The NexusDB company has TnxMemoryStream which is much more efficient. There might be some free ones out there that work better.
The stock Delphi TFileStream is also not the most efficient component. Way back, Julian Bucknall published a component named BufferedFileStream in a magazine or somewhere that worked with file streams very efficiently.
Good luck.
Related
I've been playing around with the raw data inside an 8GB Memory Stick, reading and writing directly into specific sectors, but for some reason changes don't remain consistent.
I've used Active@ Disk Editor to write a string at a specific sector, and it seems consistent when I read it through Active (it survives unmounting, rebooting...), but if I try to read it through the terminal using dd and hexdump, the outcome is different.
Some time ago I was researching ways to fully and effectively erase a disk, and I read somewhere that solid-state storage such as flash drives or SSDs has more memory than is stated, and its internals keep remapping parts of that memory to increase lifespan (wear leveling) or something like that.
I don't know if it is because of that or if it's even correct. Could you tell me if I'm wrong or where to find good documentation about the subject?
Okay I just figured it out.
Apparently when you open a disk inside a hex editor there are two ways you can go: you can open it as a physical disk (the whole disk) or as a logical disk, aka a volume or a partition.
Active@ Disk Editor was opening it as a physical disk, while dd and hexdump were dumping it as a logical disk. In other words, they were dumping the content of the only partition inside the physical disk. This means there was an offset between the real physical sectors where I was writing data using Active and the ones that I was reading (quite a big offset: 2048 sectors of 512 bytes each, i.e. 1 MiB).
So changes were being made, I was just looking at the wrong positions. Hope this saves someone a few minutes.
I am frankly stumped. This is beyond my experience.
I have a C# MVC program that generates a zip file in a MemoryStream for downloading. The action method is invoked from JavaScript by a button click.
The only problem is that in some cases the potential file size can easily exceed one Gig and from my reading, that is a common problem. I've tried upping the Maximum Allowed Content Length to 3000000000 in Request Filtering on IIS (IIS8). I've tried adding requestLimits maxAllowedContentLength to my web.config. I've even tried breaking up the zip through multiple calls to the action method (without success), although I have yet to get any confirmation/denial that this is even possible.
Is there any setting within IIS or my web.config that I could be overlooking? Could this be a company network issue, not solvable on an app developer's level?
Okay, so it's kind of hard to explain big concepts in 400 characters or less, so I think I'm just causing more confusion sticking in the comments section. Besides, I think we're close enough here to an "answer" as you're likely to get.
The default constructor of MemoryStream essentially sets the initial size to 0. In reality, the first allocation is around 256 bytes, but since the initial size is mostly a guide and it doesn't actually claim that space until it's needed, it starts at 0.
Each time you write to the stream, it checks how much is being written versus the remaining size of the buffer array. If it can't fit the write, it creates a new, larger buffer array and copies the old buffer array into that. In this way, setting an initial size can help somewhat, in that you start off with a larger initial buffer array and you may not need to grow that buffer. You also might have a better chance of getting a contiguous block of memory, which I'll explain the importance of in a bit, but that can actually work against you as well. If you only need 1MB for the file but you're initializing with 100MB, and there's not 100MB of contiguous memory, you'll get an OutOfMemoryException, even though there might be 1MB of contiguous memory available.
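A small illustration of that growth behaviour; the exact numbers can vary by runtime version, but the doubling pattern is the point:

using System;
using System.IO;

class GrowthDemo
{
    static void Main()
    {
        var ms = new MemoryStream();        // default constructor: no buffer claimed yet
        Console.WriteLine(ms.Capacity);     // 0

        ms.Write(new byte[100], 0, 100);
        Console.WriteLine(ms.Capacity);     // 256 - the minimum first allocation

        ms.Write(new byte[300], 0, 300);
        Console.WriteLine(ms.Capacity);     // 512 - the old buffer was doubled and copied over
    }
}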
Regardless of whether you initialize or not, there remain certain immutable facts. First, MemoryStream requires contiguous memory. Even if you technically have memory available on the system, you might not have large blocks of it available. In other words, if you have 4GB available but it's all fragmented, even trying to create a 1GB stream in memory could fail, simply because it can't reserve 1GB of contiguous memory. Obviously, the larger the file you're trying to create in memory, the greater the chance that you're going to run into this issue. For this reason alone, I would say you're out of luck without raising the amount of system RAM. With 8GB, and probably only 4-6GB actually available to IIS and then split up between worker processes and threads, the odds that you're going to be able to claim 25% or so of the available RAM as contiguous space are slim.
The next immutable fact may or may not be relevant, but since you haven't specified, I'll mention it. If your web app is deployed as 32-bit, you'll have a hard limit of 2GB for any object, meaning a MemoryStream could never house more than 2GB (actually around 1.3-1.6GB as .NET code consumes some of that address space), and any attempt to make it do so will result in an OutOfMemoryException, even if you had some ridiculous amount of RAM on the system like 1TB+. If your app is 64-bit, this is less likely an issue as you can address a ton more memory, assuming it's compiled properly. You'd have to pretty much try to screw that up, though, so you should be fine.
Finally, multiple writes can cause an issue as well. As I said previously, the buffer array resizes (if necessary) in response to writes. Each time it resizes, the new buffer array must also fit in contiguous address space. As a result, multiple resizes can cause you to bump into an OutOfMemoryException you wouldn't have hit if you had written all the data from the start. This is where initializing the MemoryStream can be helpful, but as I said before, it's also a double-edged sword: your initial buffer size might be too great to begin with, and you end up with an exception you might not have had if you'd let it grow organically. Long and short, try to write everything to the stream in one go rather than piecemeal.
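To make the "one go" advice concrete, here is a hedged sketch; the file list and the estimated size are hypothetical, and it only helps with the resize-and-copy churn, not with the hard contiguous-memory and 2GB limits discussed above:

using System.IO;
using System.IO.Compression;

class SingleWriteSketch
{
    static MemoryStream BuildZip(string[] filePaths, int estimatedTotalBytes)
    {
        // Reserve the buffer once so the stream does not repeatedly reallocate
        // and copy itself while the archive is written.
        var ms = new MemoryStream(estimatedTotalBytes);

        using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var path in filePaths)
                zip.CreateEntryFromFile(path, Path.GetFileName(path));
        }

        ms.Position = 0;   // rewind before handing the stream to the MVC action for download
        return ms;
    }
}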
A piece of software I'm working on outputs quite a lot of files, which are then stored on a server. During its runtime I've had one file go corrupt on me. These files are critical to the operation, so this cannot happen. I'm therefore trying to come up with a way of adding error correction to the files to prevent this from ever happening again.
I've read up on Reed-Solomon, which encodes k blocks of data plus m blocks of parity, and can then reconstruct up to m missing blocks. So what I'm thinking is taking the data stream, splitting it into these blocks, and then storing them in sequence on disk, first the data blocks, then the parity blocks. Repeat until the entire file is stored. k, m, and the block size are of course variables I'll have to investigate and play with.
However, it's my understanding that Reed-Solomon requires you to know which blocks are corrupt. How could I possibly know that? My thinking is I'd have to add some extra, simpler, error detection code to each of the blocks as I write them, otherwise I can't know if they're corrupted. Like CRC32 or something.
Have I understood this correctly, or is there a better way to accomplish this?
This is a bit of an older question, but (in my mind) it is always something that is useful and in some cases necessary. Bit rot will never be completely cured (hush, ZFS community; ZFS only has control of what's on its filesystem while it's there), so we always have to come up with proactive prevention and recovery plans.
While it was designed to facilitate piracy (specifically storing and extracting multi-GB files in chunks on newsgroups where any chunk could go missing or be corrupted), "Parchives" are actually exactly what you're looking for (see the white paper, though don't implement that scheme directly as it has a bug and newer schemes are available), and they work in practice as follows:
The complete file is fed into the encoder
Blocks are processed and Reed-Solomon blocks are generated
.par files containing those blocks are output alongside the original file
When integrity is checked (typically on the other end of a file transfer), the blocks are rechecked, and any blocks that need to be used to reconstruct missing data are pulled from the .par files.
Things eventually settled into "PAR2" (essentially a rewrite with additional features) with the following scheme:
The large file is compressed with RAR and split into chunks (typically around 100 MB each, as that was a "usually safe" maximum for usenet)
An "index" file is placed alongside the file (for example bigfile.PAR2). This has no recovery chunks.
A series of par files totaling 10% of the original data size are placed alongside, in increasingly larger file sizes (bigfile.vol029+25.PAR2, bigfile.vol104+88.PAR2, etc.)
The person on the other end then gets all the .rar files
An integrity check is run and returns a count in MB of how much data needs recovery
.PAR2 files are downloaded in an amount equal to or greater than the need
Recovery is done and integrity verified
RAR is extracted, and the original file is successfully transferred
Now without a filesystem layer this system is still fairly trivial to implement using the Parchive tools, but it has two requirements:
That the files do not change (any change to a file on disk will invalidate its parity data; of course you could allow changes, at the cost of added complexity, with a copy-on-change writing scheme)
That you run both the file generation and integrity check/recovery when appropriate.
Since all the math and methods are both known and battle-tested, you can also roll your own to meet whatever needs you have (as a hook into file read/write, spanning arbitrary path depths, storing recovery data on a separate drive, etc.). For initial tips, refer to the pros: https://www.backblaze.com/blog/reed-solomon/
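If you do roll your own, the asker's instinct about per-block detection codes is right: pair each block with a small checksum so you know which blocks to hand to the Reed-Solomon decoder as erasures. A rough sketch of that framing (the block size is arbitrary, and the actual Reed-Solomon encoding is left to one of the libraries mentioned below):

using System;
using System.IO;

class BlockFraming
{
    const int BlockSize = 4096;   // hypothetical data block size

    // Plain bitwise CRC-32 (polynomial 0xEDB88320) - enough to flag a corrupt block.
    static uint Crc32(byte[] data, int count)
    {
        uint crc = 0xFFFFFFFF;
        for (int i = 0; i < count; i++)
        {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    // Write each block followed by its CRC; on read-back, a mismatching CRC marks
    // that block as an erasure for the Reed-Solomon decoder to reconstruct.
    static void WriteFramed(Stream input, Stream output)
    {
        var block = new byte[BlockSize];
        int read;
        while ((read = input.Read(block, 0, BlockSize)) > 0)
        {
            output.Write(block, 0, read);
            output.Write(BitConverter.GetBytes(Crc32(block, read)), 0, 4);
        }
    }
}

A real layout would also record the block count and which blocks are parity, but the principle is just data block plus checksum, repeated.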
Edit: The same research that led me to this question led me to a whole subset of already-done work that I was previously unaware of:
https://crates.io/crates/solana-reed-solomon-erasure (as well as a bunch of other implementations in the Rust crate registry)
https://github.com/klauspost/reedsolomon (based on the BackBlaze code, and processes 1Gbps per core)
Etc. Search for "Reed-Solomon file recovery".
I am inserting a set of files (PDFs, each about 2 MB) into my database.
Inserting 100 files at once takes roughly 15 seconds, while inserting 250 files at once takes 80 seconds.
I am not quite sure why this big difference is happening, but I assume it is because the free memory runs out somewhere between those amounts. Could this be the problem?
If there is any more detail I can provide, please let me know.
Not exactly sure of what is happening on your side but it really looks like what is described here in the neo4j performance guide.
It could be:
Memory issues
If you are experiencing poor write performance after writing some data (initially fast, then massive slowdown), it may be the operating system that is writing out dirty pages from the memory-mapped regions of the store files. These regions do not need to be written out to maintain consistency, so to achieve the highest possible write speed that type of behavior should be avoided.
Transaction size
Are you using multiple transactions to upload your files?
Many small transactions result in a lot of I/O writes to disc and should be avoided. Too big transactions can result in OutOfMemory errors, since the uncommitted transaction data is held on the Java heap in memory.
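As a concrete shape for that middle ground, a hedged sketch of batching the PDF inserts so each transaction carries a moderate number of files; the batch size and the InsertBatchInOneTransaction helper are hypothetical, and how that helper talks to Neo4j depends on the driver you use:

using System.Collections.Generic;
using System.Linq;

class BatchedImport
{
    const int BatchSize = 25;   // hypothetical: big enough to amortise I/O, small enough for the heap

    static void Import(IReadOnlyList<byte[]> pdfFiles)
    {
        for (int i = 0; i < pdfFiles.Count; i += BatchSize)
        {
            var batch = pdfFiles.Skip(i).Take(BatchSize).ToList();

            // One transaction per batch - instead of one per file (lots of I/O)
            // or one giant transaction for all 250 files (OutOfMemory risk).
            InsertBatchInOneTransaction(batch);
        }
    }

    // Hypothetical helper: open a transaction, insert every file in the batch, commit.
    static void InsertBatchInOneTransaction(List<byte[]> batch)
    {
        // Driver-specific code goes here (Neo4j .NET driver, REST API, etc.).
    }
}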
If you are on linux, they also suggest some tuning to improve performance. See here.
You can look up the details on the page.
Also, if you are on linux, you can check memory usage by yourself during import by using this command:
$ free -m
I hope this helps!
We are loading a large flat file into BizTalk Server 2006 (Original release, not R2) - about 125 MB. We run a map against it and then take each row and make a call out to a stored procedure.
We receive the OutOfMemoryException during orchestration processing, the Windows Service restarts, uses full 2 GB memory, and crashes again.
The server is 32-bit and set to use the /3GB switch.
Also, I've separated the flow into 3 hosts - one for receive, one for orchestration, and one for send.
Anyone have any suggestions for getting this file to process without error?
Thanks,
Krip
If this is a flat file being sent through a map, you are converting it to XML, right? The increase in size could be huge. XML can easily add a factor of 5-10 times over a flat file, especially if you use descriptive or long XML tag names (which normally you would).
Something simple you could try is to rename the XML nodes to shorter names; depending on the number of records (sounds like a lot), it might actually have a pretty significant impact on your memory footprint.
Perhaps a more enterprise approach would be to subdivide this in a custom pipeline into separate message packets that can be fed through the system in more manageable chunks (similar to what Chris suggests). Then the system's throttling and memory metrics could take over. Without knowing more about your data it would be hard to say how best to do this, but with a 125 MB file I am guessing that you probably have a ton of repeating rows that do not need to be processed sequentially.
Where does it crash? Does it make it past the Transform shape? Another suggestion is to run the transform in the Receive Port. For more efficient processing, you could even debatch the message and have multiple simultaneous orchestration instances calling the stored procs. This would definitely reduce the memory profile and increase performance.