Can immutable be a memory hog?

Let's say we have a memory-intensive class like an Image, with chainable methods like Resize() and ConvertTo().
If this class is immutable, won't it take a huge amount of memory when I start doing things like i.Resize(500, 800).Rotate(90).ConvertTo(Gif), compared to a mutable one which modifies itself? How to handle a situation like this in a functional language?

If this class is immutable, won't it take a huge amount of memory?
Typically your memory requirements for that single object might double, because you might have an "old copy" and a "new copy" live at once. So you can view this phenomenon, over the lifetime of the program, as having one more large object allocated than you might in a typical imperative program. (Objects that aren't being "worked on" just sit there, with the same memory requirements as in any other language.)
How to handle a situation like this in a functional language?
Do absolutely nothing. Or more accurately, allocate new objects in good health.
If you are using an implementation designed for functional programming, the allocator and garbage collector are almost certainly tuned for high allocation rates, and everything will be fine. If you have the misfortune to try to run functional code on the JVM, well, performance won't be quite as good as with a bespoke implementation, but for most programs it will still be fine.
Can you provide more detail?
Sure. I'm going to take an exceptionally simple example: 1000x1000 greyscale image with 8 bits per pixel, rotated 180 degrees. Here's what we know:
To represent the image in memory requires 1MB.
If the image is mutable, it's possible to rotate 180 degrees by doing an update in place. The amount of temporary space needed is enough to hold one pixel. You write a doubly nested loop that amounts to
for (i in columns) do
  for (j in first half of rows) do {
    pixel temp := a[i, j];
    a[i, j] := a[width-i, height-j];
    a[width-i, height-j] := temp
  }
If the image is immutable, it's required to create an entire new image, and temporarily you have to hang onto the old image. The code is something like this:
new_a = Image.tabulate (width, height) (\ x y -> a[width-x, height-y])
The tabulate function allocates an entire, immutable 2D array and initializes its contents. During this operation, the old image is temporarily occupying memory. But when tabulate completes, the old image a should no longer be used, and its memory is now free (which is to say, eligible for recycling by the garbage collector). The amount of temporary space required, then, is enough to hold one image.
While the rotation is going on, there's no need to have copies of objects of other classes; the temporary space is needed only for the image being rotated.
N.B. For other operations such as rescaling or rotating a (non-square) image by 90 degrees, it is quite likely that even when images are mutable, a temporary copy of the entire image is going to be necessary, because the dimensions change. On the other hand, colorspace transformations and other computations which are done pixel by pixel can be done using mutation with a very small temporary space.

Yes. Immutability is a component of the eternal time-space tradeoff in computing: you sacrifice memory in exchange for the increased processing speed you gain in parallelism by foregoing locks and other concurrent access control measures.
Functional languages typically handle operations of this nature by chunking them into very fine grains. Your Image class doesn't actually hold the logical data bits of the image; rather, it uses pointers or references to much smaller immutable data segments which contain the image data. When operations need to be performed on the image data, the smaller segments are cloned and mutated, and a new copy of the Image is returned with updated references -- most of which point to data which has not been copied or changed and has remained intact.
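To make the chunking idea concrete, here is a toy C# sketch (the class and its layout are mine, not a real library; production persistent structures use trees of chunks rather than one flat array of them). "Changing" one pixel copies only the small chunk it lives in, plus the array of chunk references; every other chunk is shared with the old image:
// Toy structurally shared image: 'chunks' is treated as immutable.
class ChunkedImage {
    readonly byte[][] chunks;

    public ChunkedImage(byte[][] chunks) { this.chunks = chunks; }

    public ChunkedImage WithPixel(int chunk, int offset, byte value) {
        var newChunks = (byte[][])chunks.Clone();            // copies the references only
        newChunks[chunk] = (byte[])chunks[chunk].Clone();    // copies one small chunk
        newChunks[chunk][offset] = value;
        return new ChunkedImage(newChunks);                  // the old image is untouched
    }
}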
This is one reason why functional design requires a different fundamental thought process from imperative design. Not only are algorithms themselves laid out very differently, but data storage and structures need to be laid out differently as well to account for the memory overhead of copying.

In some cases, immutability forces you to clone the object, which means allocating more memory. It doesn't necessarily stay occupied, though, because older copies can be discarded. For example, the CLR garbage collector deals with this situation quite well, so this isn't (usually) a big deal.
However, chaining of operations doesn't actually mean cloning the object. This is certainly the case for functional lists. When you use them in the typical way, you only need to allocate a memory cell for a single element (when appending elements to the front of the list).
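For instance, a hand-rolled cons list (an illustrative C# sketch, not a built-in type) makes the sharing visible: prepending allocates exactly one node and reuses the entire old list as the tail:
// Immutable singly linked list; null stands in for the empty list.
class FuncList<T> {
    public readonly T Head;
    public readonly FuncList<T> Tail;
    public FuncList(T head, FuncList<T> tail) { Head = head; Tail = tail; }
}

// var xs = new FuncList<int>(2, new FuncList<int>(3, null)); // list [2, 3]
// var ys = new FuncList<int>(1, xs);                         // list [1, 2, 3]; xs is shared, not copied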
Your example with image processing can also be implemented in a more efficient way. I'll use C# syntax to keep the code easy to understand without knowing any FP (but it would look better in a typical functional language). Instead of actually cloning the image, you could just store the operations that you want to perform on it. For example, something like this:
class Image {
    Bitmap source;
    FileFormat format;
    float newWidth, newHeight;
    float rotation;

    // Public constructor to load the image from a file
    public Image(string sourceFile) {
        this.source = Bitmap.FromFile(sourceFile);
        this.newWidth = this.source.Width;
        this.newHeight = this.source.Height;
    }

    // Private constructor used by the 'cloning' methods
    private Image(Bitmap s, float w, float h, float r, FileFormat fmt) {
        source = s; newWidth = w; newHeight = h;
        rotation = r; format = fmt;
    }

    // Methods that can be used for creating modified clones of
    // the 'Image' value using method chaining - these methods only
    // store operations that we need to do later
    public Image Rotate(float r) {
        return new Image(source, newWidth, newHeight, rotation + r, format);
    }
    public Image Resize(float w, float h) {
        return new Image(source, w, h, rotation, format);
    }
    public Image ConvertTo(FileFormat fmt) {
        return new Image(source, newWidth, newHeight, rotation, fmt);
    }

    public void SaveFile(string f) {
        // process all the operations here and save the image
    }
}
The class doesn't actually create a clone of the entire bitmap each time you invoke a method. It only keeps track of what needs to be done later, when you finally save the image. In the following example, the underlying Bitmap would be created only once:
var i = new Image("file.jpg");
i.Resize(500, 800).Rotate(90).ConvertTo(Gif).SaveFile("fileNew.gif");
In summary, the code looks like you're cloning the object, and you are indeed creating a new instance of the Image class each time you call some operation. However, that doesn't mean the operation is memory-expensive - the work can be hidden in the functional library, which can be implemented in all sorts of ways (while still preserving the important referential transparency).

It depends on the kind of data structures used and how they are applied in a given program. In general, immutability does not have to be overly expensive on memory.
You may have noticed that the persistent data structures used in functional programs tend to eschew arrays. This is because persistent data structures typically reuse most of their components when they are "modified". (They are not really modified, of course; a new data structure is returned, and the old one stays just as it was.) In general, tree structures are favoured, because a new immutable tree can be created out of an old immutable tree by rewriting only the path from the root to the node in question. Everything else can be reused, making the process efficient in both time and memory.
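A minimal sketch of that path copying, using a toy immutable binary search tree (C# again, purely illustrative): Insert rebuilds only the nodes on the root-to-leaf path and reuses every untouched subtree of the old tree by reference.
class Node {
    public readonly int Value;
    public readonly Node Left, Right;
    public Node(int value, Node left, Node right) { Value = value; Left = left; Right = right; }
}

// Returns a new tree; the old tree 't' remains valid and mostly shared.
static Node Insert(Node t, int v) {
    if (t == null) return new Node(v, null, null);
    if (v < t.Value) return new Node(t.Value, Insert(t.Left, v), t.Right); // shares Right
    if (v > t.Value) return new Node(t.Value, t.Left, Insert(t.Right, v)); // shares Left
    return t;  // value already present: the whole old tree is reused
}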
In regards to your example, there are several ways to solve the problem other than copying a whole massive array. (That actually would be horribly inefficient.) My preferred solution would be to use a tree of array chunks to represent the image, allowing for relatively little copying on updates. Note an additional advantage: we can at relatively small cost store multiple versions of our data.
I don't mean to argue that immutability is always and everywhere the answer -- the truth and righteousness of functional programming should be tempered with pragmatism, after all.

Yes, one of the disadvantages of using immutable objects is that they tend to hog memory. One approach that comes to mind is something similar to lazy evaluation (essentially copy-on-write): when a new copy is requested, hand out a reference, and only when the user actually makes a change initialize the new copy of the object.

Short, tangential answer: in the FP languages I'm familiar with (Scala, Erlang, Clojure, F#), and for the usual data structures (arrays, lists, vectors, tuples), you need to understand shallow vs. deep copies and how they are implemented:
e.g.
Scala, clone() object vs. copy constructor
Does Scala AnyRef.clone perform a shallow or deep copy?
Erlang: message passing a shallow-copied data structure can blow up a process:
http://groups.google.com/group/erlang-programming/msg/bb39d1a147f72800

Related

Array of references that share an Arc

This one's kind of an open ended design question I'm afraid.
Anyway: I have a big two-dimensional array of stuff. This array is mutable, and is accessed by a bunch of threads. For now I've just been dealing with this as an Arc<Mutex<Vec<Vec<--owned stuff-->>>>, which has been fine.
The problem is that stuff is about to grow considerably in size, and I'll want to start holding references rather than complete structures. I could do this by inverting everything and going to Vec<Vec<Arc<Mutex<T>>>>, but I feel like that would be a ton of overhead, especially because each thread would need a complete copy of the grid rather than a single Arc/Mutex.
What I want to do is have this be an array of references, but somehow communicate that the items being referenced all live long enough according to a single top-level Arc or something similar. Is that possible?
As an aside, is Vec even the correct data type for this? For the grid in particular I really want a large, fixed-size block of memory that will live for the entire length of the program once it's initialized, and has a lot of reference locality (along either dimension.) Is there something else/more specialized I should be using?
EDIT: Giving some more specifics on my code (away from home, so this is rough):
What I want:
Outer scope initializes a bunch of Ts and somehow collectively ensures they live long enough (that's the hard part)
Outer scope initializes a grid :Something<Vec<Vec<&T>>> that stores references to the Ts
Outer scope creates a bunch of threads and passes grid to them
Threads dive in and out of some sort of (probably RW) lock on grid, reading the Ts and changing the &Ts in the process.
What I have:
Outer thread creates a grid: Arc<RwLock<Vec<Vec<T>>>>
Arc::clone(& grid)s are passed to individual threads
Read-heavy threads mostly share the lock and sometimes kick each other out for the writes.
The only problem with this is that the grid is storing actual Ts which might be problematically large. (Don't worry too much about the RwLock/thread exclusivity stuff, I think it's perpendicular to the question unless something about it jumps out at you.)
What I don't want to do:
Top level creates a bunch of Arc<Mutex<T>> for individual T
Top level creates a grid: Vec<Vec<Arc<Mutex<T>>>> and passes it to threads
The problem with that is that I worry about the size of Arc/Mutex on every grid element (I've been going up to 2000x2000 so far and may go larger). Also while the threads would lock each other out less (only if they're actually looking at the same square), they'd have to pick up and drop locks way more as they explore the array, and I think that would be worse than my current RwLock implementation.
Let me start off with your "aside" question, as I feel it's the one that can be answered:
As an aside, is Vec even the correct data type for this? For the grid in particular I really want a large, fixed-size block of memory that will live for the entire length of the program once it's initialized, and has a lot of reference locality (along either dimension.) Is there something else/more specialized I should be using?
The documentation of std::vec::Vec specifies that the layout is essentially a pointer with size information. That means that any Vec<Vec<T>> is a pointer to a densely packed array of pointers to densely packed arrays of Ts. So if "block of memory" means a contiguous block to you, then no, Vec<Vec<T>> cannot give you that. If that is part of your requirements, you'd have to deal with a datatype (let's call it Grid) that is basically a (pointer, n_rows, n_columns) triple, and define for yourself whether the layout should be row-first or column-first.
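A minimal sketch of such a Grid (the names are mine, not from an existing crate), storing the cells row-first in a single contiguous allocation:
struct Grid<T> {
    data: Vec<T>,  // one contiguous block holding rows * cols cells, row-first
    cols: usize,
}

impl<T: Clone> Grid<T> {
    fn new(rows: usize, cols: usize, fill: T) -> Self {
        Grid { data: vec![fill; rows * cols], cols }
    }
    fn get(&self, row: usize, col: usize) -> &T {
        &self.data[row * self.cols + col]
    }
    fn get_mut(&mut self, row: usize, col: usize) -> &mut T {
        &mut self.data[row * self.cols + col]
    }
}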
The next part: if you want different threads to mutate e.g. columns/rows of your grid at the same time, Arc<Mutex<Grid>> won't cut it, but you already figured that out. You should get clarity on whether you can split your problem so that each thread operates only on rows OR only on columns. Remember that if any thread holds a &mut Row, no other thread must hold a &mut Column: there will be an overlapping element, and it would be very easy to create data races. If you can assign a static range of rows to each thread (e.g. thread 1 processes rows 1-3, thread 2 processes rows 4-6, etc.), that should make your life considerably easier. To get into "row-wise" processing if it doesn't arise naturally from the problem, you might consider breaking the work into e.g. a row-wise step, where all threads operate on rows only, and then a column-wise step, possibly repeating those.
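If such a row-wise split works for you, scoped threads plus chunks_mut give every thread exclusive &mut access to its own band of rows, with no per-cell locks and no unsafe. A sketch building on the hypothetical Grid above (std::thread::scope needs Rust 1.63+; the per-cell work is a placeholder):
fn process_rows(grid: &mut Grid<u8>, threads: usize) {
    let cols = grid.cols;
    let rows = grid.data.len() / cols;
    let band_rows = (rows + threads - 1) / threads; // rows per thread, rounded up
    std::thread::scope(|s| {
        // chunks_mut yields disjoint &mut slices, so each thread owns its rows.
        for band in grid.data.chunks_mut(band_rows * cols) {
            s.spawn(move || {
                for cell in band.iter_mut() {
                    *cell = cell.wrapping_add(1); // placeholder per-cell work
                }
            });
        }
    });
}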
Speculative starting point
I would suggest that your main thread holds the Grid struct, which will almost inevitably be implemented with some unsafe methods, e.g. get_row(usize) and get_row_mut(usize) if you can split your problem into rows/columns, or get(usize, usize) and get_mut(usize, usize) if you can't. I cannot tell you exactly what these should return, but they might even be custom reference types into the Grid, which:
can only be obtained when the usual borrowing rules are fulfilled (e.g. by blocking the thread until any other GridRefMut is dropped)
implement Drop such that you don't create a deadlock
Every thread holds an Arc<Grid>, and can draw cells/rows/columns for reading/mutating out of the grid as needed, while the grid itself keeps track of the references being created and dropped.
The downside of this approach is that you basically implement a runtime borrow-checker yourself. It's tedious and probably error-prone. You should browse crates.io before you do that, but your problem sounds specific enough that you might not find a fitting solution, let alone one that's sufficiently documented.

boost lockfree spsc_queue cache memory access

I need to be extremely concerned with speed/latency in my current multi-threaded project.
Cache access is something I'm trying to understand better. And I'm not clear on how lock-free queues (such as the boost::lockfree::spsc_queue) access/use memory on a cache level.
I've seen queues used where the pointer of a large object that needs to be operated on by the consumer core is pushed into the queue.
If the consumer core pops an element from the queue, I presume that means the element (a pointer in this case) is already loaded into the consumer core's L2 and L1 cache. But to use it, doesn't the consumer still need to dereference that pointer, finding and loading the pointed-to object either from the L3 cache or across the interconnect (if the other thread is on a different CPU socket)? If so, would it maybe be better to simply send a copy of the object that could be disposed of by the consumer?
Thank you.
C++ is principally a pay-for-what-you-need ecosystem.
Any regular queue will let you choose the storage semantics (by value or by reference).
However, this time you ordered something special: you ordered a lock free queue.
In order to be lock free, it must be able to perform all the observable modifying operations as atomic operations. This naturally restricts the types that can be used in these operations directly.
You might doubt whether it's even possible to have a value-type that exceeds the system's native register size (say, int64_t).
Good question.
Enter Ringbuffers
Indeed, any node based container would just require pointer swaps for all modifying operations, which is trivially made atomic on all modern architectures.
But does anything that involves copying multiple distinct memory areas, in non-atomic sequence, really pose an unsolvable problem?
No. Imagine a flat array of POD data items. Now, if you treat the array as a circular buffer, you just have to maintain the indices of the buffer front and end positions atomically. The container could, at its leisure, update an internal 'dirty front index' while it copies ahead of the external front. (The copy can use relaxed memory ordering.) Only once the whole copy is known to have completed is the external front index updated. This update needs to be in acq_rel/cst memory order[1].
As long as the container is able to guard the invariant that the front never fully wraps around and reaches the back, this is a sweet deal. I think this idea was popularized in the Disruptor library (of LMAX fame). You get mechanical sympathy from:
linear memory access patterns while reading/writing
even better if you can make the record size aligned with (a multiple) physical cache lines
all the data is local unless the POD contains raw references outside that record
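To make the scheme concrete, here is a stripped-down single-producer/single-consumer ring buffer sketch (illustrative only, not boost's code; one slot is sacrificed to distinguish full from empty):
#include <array>
#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class SpscRing {
    std::array<T, N> buf_{};
    std::atomic<std::size_t> head_{0};  // next slot to write, owned by the producer
    std::atomic<std::size_t> tail_{0};  // next slot to read, owned by the consumer
public:
    bool push(const T& v) {                       // called by the producer only
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // full
        buf_[head] = v;                           // plain copy into the owned slot
        head_.store(next, std::memory_order_release);  // publish only after the copy is done
        return true;
    }
    bool pop(T& out) {                            // called by the consumer only
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return false;  // empty
        out = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);  // free the slot
        return true;
    }
};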
How Does Boost's spsc_queue Actually Do This?
Yes, spsc_queue stores the raw element values in a contiguous aligned block of memory (e.g. from compile_time_sized_ringbuffer, which underlies spsc_queue when a maximum capacity is supplied at compile time):
typedef typename boost::aligned_storage<max_size * sizeof(T),
                                         boost::alignment_of<T>::value
                                        >::type storage_type;

storage_type storage_;

T * data()
{
    return static_cast<T*>(storage_.address());
}
(The element type T need not even be POD, but it needs to be both default-constructible and copyable).
Yes, the read and write pointers are atomic integral values. Note that the boost devs have taken care to apply enough padding to avoid False Sharing on the cache line for the reading/writing indices: (from ringbuffer_base):
static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(size_t);
atomic<size_t> write_index_;
char padding1[padding_size]; /* force read_index and write_index to different cache lines */
atomic<size_t> read_index_;
In fact, as you can see, there are only the "internal" indices on the read and write sides. This is possible because there's only one writing thread and also only one reading thread, which means that there could only be more space at the end of a write operation than anticipated.
Several other optimizations are present:
branch prediction hints for platforms that support it (unlikely())
it's possible to push/pop a range of elements at once. This should improve throughput in case you need to siphon from one buffer/ringbuffer into another, especially if the raw element size is not equal to (a whole multiple of) a cacheline
use of std::uninitialized_copy where possible
The calling of trivial constructors/destructors will be optimized out at instantiation time
the uninitialized_copy will be optimized into memcpy on all major standard library implementations (meaning that e.g. SSE instructions will be employed if your architecture supports it)
All in all, we see a best-in-class implementation of the ring-buffer idea.
What To Use
Boost has given you all the options. You can elect to make your element type a pointer to your message type. However, as you already raised in your question, this level of indirection reduces locality of reference and might not be optimal.
On the other hand, storing the complete message type in the element type could become expensive if copying is expensive. At the very least try to make the element type fit nicely into a cache line (typically 64 bytes on Intel).
So in practice you might consider storing frequently used data right there in the value, and referencing the less-often-used data via a pointer (the cost of the pointer is low unless it's traversed).
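For example (a hypothetical layout, not anything from boost): keep the hot fields by value so one element fills exactly one 64-byte cache line, and reach the bulky, rarely used part through a pointer:
#include <cstdint>

struct Payload;  // large, rarely touched data, allocated elsewhere (hypothetical)

struct alignas(64) Message {
    std::uint64_t sequence;    // hot: examined by every consumer
    double        price;       // hot
    std::uint32_t quantity;    // hot
    Payload*      attachment;  // cold: only dereferenced when actually needed
};
static_assert(sizeof(Message) == 64, "one element per cache line");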
If you need that "attachment" model, consider using a custom allocator for the referred-to data so you can achieve memory access patterns there too.
Let your profiler guide you.
[1] I suppose that for SPSC, acq_rel should work, but I'm a bit rusty on the details. As a rule, I make it a point not to write lock-free code myself. I recommend anyone else follow my example :)

C-Struct vs Object

I am currently working on a Conway's Game of Life simulator for the iPhone and I had a few questions about memory management. Note that I am using ARC.
For my application, I am going to need a large amount of either C style structs or Objective-C objects to represent cells. There may be a couple thousand of these, so obviously, memory management came to mind.
Structs: My argument for structs is that the cells do not need typical OO properties. The only thing they will be holding is two BOOL values, so there will not be a huge amount of memory chewed up by these cells. Also, I need to utilize a two-dimensional array. With structs, I can use C-style 2D arrays. As far as I know, there is no replacement for this in Objective-C. I feel that it is overkill to create an object for just two boolean values.
Objective-C objects: My argument (and most other people's) is that the memory management around Objective-C objects is very easy and efficient with ARC. Also, I have seen arguments that a struct does not give that big a memory reduction compared to an object.
So, my question: should I go with the old-school, lean structs that work with C-style two-dimensional arrays? Or should I stick with the typical Objective-C objects and risk the extra memory used?
Afterthoughts: If you recommend Objective-C objects, provide an alternate storage method that represents a two-dimensional array. This is critical and is one of the biggest downsides of going with Objective-C objects.
Thank you.
"Premature optimization is the root of all evil"... If you are trying to build a Game of Life server with 100,000 users playing concurrently, memory footprint might matter. For a single-person implementation on any modern device, even a mobile one, memory size is pretty academic.
Therefore, do whatever either gets the game up and running fastest or (better) makes the code most readable and maintainable. Human cycles cost more than computer cycles. Suppose you needed a third boolean for each cell of the game... wouldn't an object you could extend save a ton of time rather than hardcoded array indices? (A struct is a lot better than an array of primitives for this reason...)
I've certainly used denser representations of data when I need to, but the overhead in programmer time has to be worth it. Just my $.02...
If it is just 2 BOOL values that you are going to store for every cell, then you could just use an array of integers to do the job. For example:
Let us assume that the two bool values are boolX and boolY; we could combine them into an int as:
int combinedBool = boolY + (10*boolX);
So you can retrieve the two bool values like:
BOOL boolX, boolY;
boolX = combinedBool/10;
boolY = combinedBool%10;
And then you can store the whole board as a single-dimensional array of integers, with the index of each cell given by ((yIndex*width)+xIndex), where width is the number of cells left-to-right on your board, and xIndex and yIndex are the X and Y coordinates of the cell on your board.
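A variation on the same idea in plain C (cellIndex and makeBoard are names I made up): pack the two flags into bits of a single byte per cell, and keep the board as one flat heap block indexed row-major:
#include <stdint.h>
#include <stdlib.h>

typedef uint8_t Cell;   /* bit 0 = alive now, bit 1 = alive in the next generation */

static size_t cellIndex(size_t x, size_t y, size_t width) {
    return y * width + x;                           /* (yIndex * width) + xIndex */
}

static Cell *makeBoard(size_t width, size_t height) {
    return calloc(width * height, sizeof(Cell));    /* zeroed: every cell starts dead */
}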
Hope this helps with your memory management and cell organisation.
You could build one and test its size with malloc_size(myObject). Thousands of pairs of bools will be small enough. In fact, you'll be able to make the objects larger and enjoy the benefits of the OO design. For example, what if the cells also kept pointers to their neighboring cells? The cells could then compute their own t+1 state with cached access to their neighbors.

Best practice for dealing with package allocation in Go

I'm writing a package which makes heavy use of buffers internally for temporary storage. I have a single global (but not exported) byte slice which I start with 1024 elements and grow by doubling as needed.
However, it's very possible that a user of my package would use it in such a way that caused a large buffer to be allocated, but then stop using the package, thus wasting a large amount of allocated heap space, and I would have no way of knowing whether to free the buffer (or, since this is Go, let it be GC'd).
I've thought of three possible solutions, none of which is ideal. My question is: are any of these solutions, or maybe ones I haven't thought of, standard practice in situations like this? Is there any standard practice? Any other ideas?
Screw it.
Oh well. It's too hard to deal with this, and leaving allocated memory lying around isn't so bad.
The problem with this approach is obvious: it doesn't solve the problem.
Exported "I'm done" or "Shrink internal memory usage" function.
Export a function which the user can call (and calling it intelligently is obviously up to them) which will free the internal storage used by the package.
The problem with this approach is twofold. First, it makes for a more complex, less clean interface to the user. Second, it may not be possible or practical for the user to know when calling such a function is wise, so it may be useless anyway.
Run a goroutine which frees the buffer after a certain period of the package going unused, or which shrinks the buffer (perhaps halving the length) whenever its size hasn't been increased in a while.
The problem with this approach is primarily that it puts unnecessary strain on the scheduler. Obviously a single goroutine isn't so bad, but if this were accepted practice, it wouldn't scale well if every package you imported were doing this under the hood. Also, if you have a time-sensitive application, you may not want code running when you're not aware of it (that is, you may assume that the package isn't doing any work when its functions are not being called - a reasonable assumption, I'd say).
So... any ideas?
NOTE: You can see the existing project here (the relevant code is only a few tens of lines).
A common approach to this is letting the client pass an existing []byte (or whatever) as an argument to some call/function/method. For example:
// The returned slice may be a sub-slice of dst if dst was large enough
// to hold the entire encoded block. Otherwise, a newly allocated slice
// will be returned. It is valid to pass a nil dst.
func Foo(dst []byte, whatever Bar) (ret []byte, err error)
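A sketch of how a caller might use that signature (Foo and Bar are from the example above; encodeAll and process are hypothetical): one buffer is reused across calls, so a new allocation only happens when the existing capacity is too small.
func encodeAll(bars []Bar) error {
    buf := make([]byte, 0, 1024)       // reused across iterations
    for _, b := range bars {
        var err error
        buf, err = Foo(buf[:0], b)     // hand the old buffer back in as dst
        if err != nil {
            return err
        }
        process(buf)                   // hypothetical consumer of the encoded bytes
    }
    return nil
}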
Another approach is to get a new []byte from, for example, a cache and/or a pool (if you prefer the latter name for that concept) and rely on clients to return used buffers to such a "recycle-bin".
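One possible "recycle-bin" is sync.Pool from the standard library; a minimal sketch (getBuf and putBuf are illustrative names, and callers must not keep using a buffer after returning it):
var bufPool = sync.Pool{
    New: func() interface{} { return make([]byte, 0, 1024) },
}

// getBuf hands out a zero-length buffer that keeps whatever capacity it
// grew to earlier; putBuf returns it for reuse by a later caller.
func getBuf() []byte  { return bufPool.Get().([]byte)[:0] }
func putBuf(b []byte) { bufPool.Put(b) }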
BTW: You're doing it right by thinking about this. Where it's possible to reasonably reuse []byte buffers, there's a potential for lowering the GC load and thus making your program better performing. Sometimes the difference can be critical.
You could reslice your buffer at the end of every operation.
buffer = buffer[:0]
Then your function extendAndSliceBuffer would have the original backing array most likely available if it needs to grow. If not, you would suffer a new allocation, which you might get anyway when you do extendAndSliceBuffer.
Overall, I think a cleaner solution is to do like #jnml said and let the users pass their own buffer if they care about performance. If they don't care about performance, then you should not use a global var and simply allocate the buffer as you need and let it go when it gets out of scope.
I have a single global (but not exported) byte slice which I start
with 1024 elements and grow by doubling as needed.
And there's your problem. You shouldn't have a global like this in your package.
Generally the best approach is to have an exported struct with attached functions. The buffer should reside in this struct unexported. That way the user can instantiate it and let the garbage collector clean it up when they let go of it.
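A rough sketch of that shape (Encoder and its methods are illustrative names, not your package's API); when the caller drops the Encoder, its buffer becomes garbage along with it:
type Encoder struct {
    buf []byte // unexported scratch space; grows as needed and is reused
}

func NewEncoder() *Encoder {
    return &Encoder{buf: make([]byte, 0, 1024)}
}

func (e *Encoder) Encode(data []byte) []byte {
    e.buf = e.buf[:0]                // reuse the backing array between calls
    e.buf = append(e.buf, data...)   // stand-in for the real encoding work
    return e.buf
}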
You also want to avoid requiring globals like this as it can hamper unit tests. A unit test should be able to instantiate the exported struct, as the user can, and do it each time for every test.
Also, depending on what kind of buffer you need, bytes.Buffer may be useful, as it already implements the io.Reader and io.Writer interfaces. bytes.Buffer also automatically grows and shrinks its buffer. In buffer.go you'll see various calls to b.Truncate(0) that do the shrinking, with the comment "reset to recover space".
It's generally really really bad form to write Go code that is not thread-safe. If two different goroutines call functions that modify the buffer at the same time, who knows what state the buffer will be in when they finish? Just let the user provide a scratch-space buffer if they decide that the allocation performance is a bottleneck.

Working with large arrays - OutOfRam

I have an algorithm where I create two bi-dimensional arrays like this:
TYPE
  TPtrMatrixLine = array of byte;
  TCurMatrixLine = array of integer;
  TPtrMatrix     = array of TPtrMatrixLine;
  TCurMatrix     = array of TCurMatrixLine;

function x;
var
  PtrsMX: TPtrMatrix;
  CurMx : TCurMatrix;
begin
  { Try to allocate RAM }
  SetLength(PtrsMX, RowNr+1, ColNr+1);
  SetLength(CurMx , RowNr+1, ColNr+1);
  for all rows do
    for all cols do
      FillMatrixWithData;   { <------- CPU intensive task. It could take up to 10-20 min }
end;
The two matrices have always the same dimension.
Usually there are only 2000 lines and 2000 columns in the matrix but sometimes it can go as high as 25000x6000 so for both matrices I need something like 146.5 + 586.2 = 732.8MB of RAM.
The problem is that the two blocks need to be contiguous so in most cases, even if 500-600MB of free RAM doesn't seem much on a modern computer, I run out of RAM.
The algorithm fills the cells of the array with data based on the neighbors of that cell. The operations are just additions and subtractions.
The TCurMatrixLine is the one that takes a lot of RAM, since it uses integers to store the data. Unfortunately, the stored values may be negative, so I cannot use Word instead of Integer. SmallInt is too small (my values are bigger than SmallInt allows, but smaller than Word). I hope that if there is another way to implement this, it doesn't add a lot of overhead, since processing a matrix with so many lines/columns already takes a lot of time. In other words, I hope that decreasing the memory requirements will not increase the processing time.
Any idea how to decrease the memory requirements?
[I use Delphi 7]
Update
Somebody suggested that each row of my array should be an independent uni-dimensional array.
I create as many rows (arrays) as I need and store them in a TList. Sounds very good. Obviously there will be no problem allocating such small memory blocks. But I am afraid it will have a gigantic impact on speed. I currently use
TCurMatrixLine = array of integer;
TCurMatrix = array of TCurMatrixLine;
because it is faster than TCurMatrix = array of array of integer (because of the way the data is placed in memory). So, breaking the array into independent lines may affect the speed.
The suggestion of using a signed 2 byte integer will greatly aid you.
Another useful tactic is to mark your exe as being LARGE_ADDRESS_AWARE by adding {$SetPEFlags IMAGE_FILE_LARGE_ADDRESS_AWARE} to your .dpr file. This will only help if you are running on 64 bit Windows and will increase your address space from 2GB to 4GB.
It may not work on Delphi 7 (I seem to recall you are using D7) and you must be using FastMM since the old Borland memory manager isn't compatible with large address space. If $SetPEFlags isn't available you can still mark the exe with EDITBIN.
If you still encounter difficulties, then yet another trick is to allocate smaller sub-blocks of memory and use a wrapper class to handle mapping indices to the appropriate sub-block and offset within it. You can use a default index property to make this transparent to the calling code (see the sketch below).
Naturally a block allocated approach like this does incur some processing overhead but it's your best bet if you are having troubles with getting contiguous blocks.
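A rough, untested Delphi sketch of such a wrapper, using the simplest sub-block scheme (one independently allocated block per row) and a default array property so call sites can still write M[Row, Col]:
type
  TIntRow = array of Integer;

  TChunkedIntMatrix = class
  private
    FRows: array of TIntRow;   { one independently allocated block per row }
    function  GetItem(Row, Col: Integer): Integer;
    procedure SetItem(Row, Col: Integer; Value: Integer);
  public
    constructor Create(RowCount, ColCount: Integer);
    property Items[Row, Col: Integer]: Integer read GetItem write SetItem; default;
  end;

constructor TChunkedIntMatrix.Create(RowCount, ColCount: Integer);
var
  i: Integer;
begin
  SetLength(FRows, RowCount);
  for i := 0 to RowCount - 1 do
    SetLength(FRows[i], ColCount);   { many small blocks instead of one huge one }
end;

function TChunkedIntMatrix.GetItem(Row, Col: Integer): Integer;
begin
  Result := FRows[Row][Col];
end;

procedure TChunkedIntMatrix.SetItem(Row, Col: Integer; Value: Integer);
begin
  FRows[Row][Col] := Value;
end;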
If the absolute values of the elements of CurMx fit in a Word, then you can store them in a Word and use a separate array of Boolean for the sign. That saves 1 byte per element.
Have you considered manually allocating the data structure on the heap?
...and measured how this will affect the memory usage and the performance?
Using the heap might actually increase speed and reduce the memory usage, because you can avoid having the whole array copied from one memory segment to another (e.g. if your FillMatrixWithData is declared with a non-const open array parameter).
