What is the maximum length for SynMemo? - Delphi

SynMemo at SourceForge seems to be a very good text editor and code highlighter. It is a pity that it has not been updated for a long time. It is pure VCL. I want to know what its maximum length is. What is the largest text file it can load?
Thanks

On a 32-bit operating system you can load roughly 2 GB of text into the editor (not recommended). If you're running a 64-bit OS, have a look at Why 2 GB memory limit when running in 64 bit Windows? and at http://cc.embarcadero.com/Item/24309 if you care to load more than 2 GB of data into the editor.
From my experience I was able to load a couple of hundred megabytes without an issue, but the component becomes less and less responsive the more you load. Around 80 MB is still very fast to load and work with.
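If you want to warn the user before loading something huge, a minimal sketch along those lines (the 200 MB threshold is an arbitrary figure based on the experience above, and SynMemo1 is just a component name assumed for the example):
uses
  Classes, SysUtils, Dialogs;

procedure TForm1.LoadBigFile(const FileName: string);
const
  WarnLimit = 200 * 1024 * 1024; // ~200 MB; beyond this the editor gets sluggish
var
  FS: TFileStream;
  FileSize: Int64;
begin
  // Check the size first so the user knows what to expect.
  FS := TFileStream.Create(FileName, fmOpenRead or fmShareDenyNone);
  try
    FileSize := FS.Size;
  finally
    FS.Free;
  end;
  if FileSize > WarnLimit then
    ShowMessage('This file is large; the editor may become unresponsive.');
  SynMemo1.Lines.LoadFromFile(FileName); // SynMemo1: a TSynMemo dropped on the form
end;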
I hope this helps.

Related

How can I monitor peak memory usage for a Delphi application?

I just finished a major refactor of my Delphi application and wanted to compare peak memory usage between builds. Basically I need proof that the latest refactor takes less RAM than the previous build. Since the application changed so much, it's hard to pinpoint an equivalent point in time to compare metrics. The best way to compare would be to know the highest memory consumption during the application's execution. For example, if my application needs 1 MB of RAM for its whole run, but during 1 ms it needed 2 MB, I want to get 2 MB as the result.
I started using FastMM4, but I'm not sure if it can do what I need. It can be an external tool or something I embed in my application (à la FastMM4).
You can use Process Explorer.
Right-click on the top header, then use the Select Columns menu and check Peak Private Bytes from the Process Memory tab.
Process Explorer as recommended by dwrbudr is nice, but it lacks the granularity I needed, so I ended up using FastMM4 to get the memory usage during the whole flow of each build. I just logged the values and then compared the evolution manually.
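If you would rather read the peak figures from inside the application than from an external tool, a minimal sketch using the PSAPI call GetProcessMemoryInfo could look like the following (the declarations come from Delphi's PsAPI unit; exact names can differ slightly between Delphi versions):
uses
  Windows, PsAPI, SysUtils;

// Returns the peak private (pagefile-backed) size and the peak working set
// size of the current process, in bytes.
procedure GetPeakMemoryUsage(out PeakPrivate, PeakWorkingSet: Int64);
var
  PMC: TProcessMemoryCounters;
begin
  FillChar(PMC, SizeOf(PMC), 0);
  PMC.cb := SizeOf(PMC);
  Win32Check(GetProcessMemoryInfo(GetCurrentProcess, @PMC, SizeOf(PMC)));
  PeakPrivate := PMC.PeakPagefileUsage;
  PeakWorkingSet := PMC.PeakWorkingSetSize;
end;
Calling this just before the application shuts down gives a single number per build that is easy to log and compare.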

Get available memory for a process

I use Delphi 2007, so there is a 32-bit limit of available memory.
Using the IMAGE_FILE_LARGE_ADDRESS_AWARE PE flag, there should be a 3 GB limit instead of 2 GB:
{$SetPEFlags IMAGE_FILE_LARGE_ADDRESS_AWARE} // Allows usage of more than 2GB memory
This is the method I use to get the current memory usage of the process:
// Requires Windows, PsAPI and SysUtils (for Win32Check) in the uses clause.
function MemoryUsed: Int64;
var
  PMC: _PROCESS_MEMORY_COUNTERS_EX;
begin
  PMC.cb := SizeOf(PMC);
  Win32Check(GetProcessMemoryInfo(GetCurrentProcess, @PMC, SizeOf(PMC)));
  Result := PMC.PrivateUsage; // private bytes committed by this process
end;
Now I want a way to get the total amount of available memory for the process. It should be around 3 GB. But I don't want to hardcode it, as in the future we will move to a newer Delphi and 64-bit.
What Win32 API function should I use?
Available memory: the computer's available memory. Maybe 8 GB of RAM is installed; if more is required, the OS starts to swap memory to disk.
Process available memory: a limitation of the executable and of Windows. Most Windows installations are now 64-bit, so that is not a problem. But if the executable is compiled as 32-bit with IMAGE_FILE_LARGE_ADDRESS_AWARE, the limit should be 3 GB, right? When the executable is 64-bit it will be much larger, maybe 64 GB (but then swapping may happen if less RAM is installed...).
So my question is, how can I get the process's available memory?
There are a couple of obvious things you can do. Call GetSystemInfo and subtract lpMinimumApplicationAddress from lpMaximumApplicationAddress to find the amount of address space available to your process.
The amount of physical memory available to you is much harder to obtain, and is not a fixed quantity. You are competing with all the other processes for that, and so this is a very fluid and dynamic concept. You can find out how much physical memory is available on the system by calling GlobalMemoryStatusEx. That returns other information too but it's very easy to misinterpret it. In fact this API will also tell you how much virtual memory is available to your process which would give you the same information as in the first paragraph.
Perhaps what you want is the minimum of the total physical and total virtual memory. But I would not like to say. I've seen many examples of code that needlessly limits its ability to perform by taking bad decisions based on misinterpreted memory statistics.
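To make the two figures concrete, here is a rough 32-bit sketch: GetSystemInfo gives the address-space bounds and GlobalMemoryStatusEx the physical and virtual figures (TMemoryStatusEx and GlobalMemoryStatusEx are assumed to be declared in your Windows unit, as they are in recent Delphi versions; older ones may need a manual declaration):
uses
  Windows, SysUtils;

function MemoryLimitsReport: string;
var
  SI: TSystemInfo;
  MS: TMemoryStatusEx;
  AddressSpace: Int64;
begin
  // Address space available to this process: roughly 2, 3 or 4 GB for a
  // 32-bit build, depending on IMAGE_FILE_LARGE_ADDRESS_AWARE and the OS.
  GetSystemInfo(SI);
  AddressSpace := Int64(Cardinal(SI.lpMaximumApplicationAddress)) -
                  Int64(Cardinal(SI.lpMinimumApplicationAddress));

  // System-wide physical memory and this process's virtual memory figures.
  FillChar(MS, SizeOf(MS), 0);
  MS.dwLength := SizeOf(MS);
  Win32Check(GlobalMemoryStatusEx(MS));

  Result := Format('Address space: %d MB, total physical: %d MB, available virtual: %d MB',
    [AddressSpace div (1024 * 1024),
     Int64(MS.ullTotalPhys) div (1024 * 1024),
     Int64(MS.ullAvailVirtual) div (1024 * 1024)]);
end;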

Neo4j inserting large files - huge difference in time between

I am inserting a set of files (PDFs, 2 MB each) into my database.
Inserting 100 files at once takes about 15 seconds, while inserting 250 files at once takes 80 seconds.
I am not quite sure why this big difference happens, but I assume the free memory fills up somewhere between those two amounts. Could this be the problem?
If there is any more detail I can provide, please let me know.
I am not exactly sure what is happening on your side, but it really looks like what is described here in the Neo4j performance guide.
It could be:
Memory issues
If you are experiencing poor write performance after writing some data (initially fast, then massive slowdown) it may be the operating system that is writing out dirty pages from the memory mapped regions of the store files. These regions do not need to be written out to maintain consistency, so to achieve the highest possible write speed that type of behavior should be avoided.
Transaction size
Are you using multiple transactions to upload your files?
Many small transactions result in a lot of I/O writes to disc and should be avoided. Too big transactions can result in OutOfMemory errors, since the uncommitted transaction data is held on the Java heap in memory.
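As a rough illustration of keeping transactions medium-sized, here is a Delphi-flavored sketch of the batching idea (since your environment isn't stated); InsertFilesInOneTransaction is a hypothetical routine standing in for however your code actually talks to Neo4j:
uses
  Classes, Math;

// Hypothetical: opens one transaction, inserts Count files starting at
// StartIndex, then commits.
procedure InsertFilesInOneTransaction(Files: TStrings; StartIndex, Count: Integer);
begin
  // ... your actual Neo4j insert code goes here ...
end;

procedure ImportInBatches(Files: TStrings);
const
  BatchSize = 50; // neither one file per transaction (lots of I/O)
                  // nor all files in one transaction (lots of heap)
var
  I: Integer;
begin
  I := 0;
  while I < Files.Count do
  begin
    InsertFilesInOneTransaction(Files, I, Min(BatchSize, Files.Count - I));
    Inc(I, BatchSize);
  end;
end;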
If you are on Linux, they also suggest some tuning to improve performance. See here.
You can look up the details on the page.
Also, if you are on Linux, you can check memory usage yourself during the import with this command:
$ free -m
I hope this helps!

What allocating method to use for a high volume logger application?

I'm developing a logger/sniffer using Delphi. During operation I get huge amounts of data, which can accumulate during stress operations to around 3 GB.
On certain computers when we get to those levels the application stops functioning and sometimes throws exceptions.
Currently I'm using GetMem function to allocate the pointer to each message.
Is there a better way to allocate the memory so I could minimize the chances for failure? Keep in mind that I can't limit the size to a hard limit.
What do you think about using HeapAlloc, VirtualAlloc or maybe even mapped files? Which would be better?
Thank you.
Your fundamental problem is the hard address space limit of 4 GB for 32-bit processes. Since you are hitting problems at 3 GB, I can only presume that you are using /LARGEADDRESSAWARE and running on 64-bit Windows, or on 32-bit Windows with the /3GB boot switch.
I think you have a few options, including but not limited to the following:
Use less memory. Perhaps you can process in smaller chunks or push some of the memory to disk (see the sketch at the end of this answer).
Use 64 bit Delphi (just released) or FreePascal. This relieves you of the address space constraint but constrains you to 64 bit versions of Windows.
Use memory mapped files. On a machine with a lot of memory this is a way of getting access to the OS memory cache. Memory mapped files are not for the faint hearted.
I can't advise definitively on a solution since I don't know your architecture but in my experience, reducing your memory footprint is often the best solution.
Using a different allocator is likely to make little difference. Yes it is true that there are low-fragmentation allocators but they surely won't really solve your problem. All they could do would be make it slightly less likely to arise.
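As one illustration of the "push some of the memory to disk" option above (a sketch only, not your design; the message layout is made up for the example):
uses
  Classes, SysUtils;

type
  // Hypothetical captured-message header, just for the example.
  TMsgHeader = packed record
    TimeStamp: TDateTime;
    DataLen: Integer;
  end;

  // Writes each captured message straight to a spool file instead of keeping
  // it in a GetMem'd block, so the process address space stays small.
  TDiskSpooler = class
  private
    FStream: TFileStream;
  public
    constructor Create(const SpoolFile: string);
    destructor Destroy; override;
    procedure Append(const Data; DataLen: Integer);
  end;

constructor TDiskSpooler.Create(const SpoolFile: string);
begin
  inherited Create;
  FStream := TFileStream.Create(SpoolFile, fmCreate);
end;

destructor TDiskSpooler.Destroy;
begin
  FStream.Free;
  inherited;
end;

procedure TDiskSpooler.Append(const Data; DataLen: Integer);
var
  Header: TMsgHeader;
begin
  Header.TimeStamp := Now;
  Header.DataLen := DataLen;
  FStream.WriteBuffer(Header, SizeOf(Header));
  FStream.WriteBuffer(Data, DataLen); // message body goes straight to disk
end;
The viewer side can then read the spool file back in chunks instead of addressing 3 GB of live memory.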

What is the fastest way for reading huge files in Delphi?

My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes.
The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:
FileStream.Position := Offset;
MemoryStream.CopyFrom(FileStream, Size);
This works fine but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 kbytes in size.
The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue.
I am using Delphi 2007 on a Windows XP box.
What can I do to speed up this file access?
edit:
The file access is slow for large files, regardless of which part of the file is being read.
The program usually does not read the file sequentially. The order of the chunks is user driven and cannot be predicted.
It is always slower to read a chunk from a large file than to read an equally large chunk from a small file.
I am talking about the performance for reading a chunk from the file, not about the overall time it takes to process a whole file. The latter would obviously take longer for larger files, but that's not the issue here.
I need to apologize to everybody: after I implemented file access using a memory-mapped file as suggested, it turned out that it did not make much of a difference. After I added some more timing code, it also turned out that it is not the file access that slows down the program. The file access actually takes nearly constant time regardless of the file size. Some part of the user interface (which I have yet to identify) seems to have a performance problem with large amounts of data, and somehow I failed to see the difference when I first timed the processes.
I am sorry for being sloppy in identifying the bottleneck.
If you open the help topic for the CreateFile() WinAPI function, you will find interesting flags there such as FILE_FLAG_NO_BUFFERING and FILE_FLAG_RANDOM_ACCESS. You can play with them to gain some performance.
Next, copying the file data, even 100 KB in size, is an extra step which slows down operations. It is a good idea to use the CreateFileMapping and MapViewOfFile functions to get a ready-to-use pointer to the data. This way you avoid copying and also possibly get certain performance benefits (but you need to measure speed carefully).
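A sketch of that approach for a 32-bit build (error handling kept minimal; the helper maps one chunk on demand, aligned to the allocation granularity as MapViewOfFile requires, and hands the pointer to a callback):
uses
  Windows, SysUtils;

type
  // Callback that receives a pointer to the mapped chunk.
  TChunkProc = procedure(Chunk: Pointer; Size: Cardinal);

// Maps Size bytes starting at Offset of FileName into the address space and
// hands the pointer to Process; nothing is copied into an intermediate stream.
procedure WithMappedChunk(const FileName: string; Offset: Int64;
  Size: Cardinal; Process: TChunkProc);
var
  SI: TSystemInfo;
  FileH, MapH: THandle;
  AlignedOfs: Int64;
  Delta: Cardinal;
  View: Pointer;
begin
  // MapViewOfFile needs the offset aligned to the allocation granularity
  // (usually 64 KB), so align down and remember the difference.
  GetSystemInfo(SI);
  AlignedOfs := (Offset div SI.dwAllocationGranularity) * SI.dwAllocationGranularity;
  Delta := Cardinal(Offset - AlignedOfs);

  FileH := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, 0);
  Win32Check(FileH <> INVALID_HANDLE_VALUE);
  try
    MapH := CreateFileMapping(FileH, nil, PAGE_READONLY, 0, 0, nil);
    Win32Check(MapH <> 0);
    try
      View := MapViewOfFile(MapH, FILE_MAP_READ,
        Int64Rec(AlignedOfs).Hi, Int64Rec(AlignedOfs).Lo, Delta + Size);
      Win32Check(View <> nil);
      try
        Process(Pointer(Cardinal(View) + Delta), Size);
      finally
        UnmapViewOfFile(View);
      end;
    finally
      CloseHandle(MapH);
    end;
  finally
    CloseHandle(FileH);
  end;
end;
Whether this beats a buffered TFileStream depends on the access pattern, so measure both.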
Maybe you can take this approach:
Sort the entries on file position and then do the following:
Take the entries that only need the first X MB of the file (up to a certain file position)
Read X MB from the file into a buffer (TMemoryStream)
Now read the entries from the buffer (maybe multithreaded)
Repeat this for all the entries.
In short: cache a part of the file and read all entries that fit into it (multithreaded), then cache the next part, and so on.
Maybe you can gain speed if you just take your original approach, but sort the entries on position.
The stock TMemoryStream in Delphi is slow due to the way it allocates memory. The NexusDB company has TnxMemoryStream which is much more efficient. There might be some free ones out there that work better.
The stock Delphi TFileStream is also not the most efficient component. Way back in history, Julian Bucknall published a component named BufferedFileStream in a magazine or somewhere that worked with file streams very efficiently.
Good luck.
