How are hashtables (maps) stored in memory?

This question is specifically for hashtables, but might also cover other data structures such as linked lists or trees.
For instance, if you have a struct as follows:
struct Data
{
    int value1;
    int value2;
    int value3;
};
If each integer is 4-byte aligned and stored sequentially in memory, are the key and value of a hash table stored sequentially as well? Consider the following:
std::map<int, std::string> list;
list[0] = "first";
Is that first element represented like this?
struct ListNode
{
    int key;
    std::string value;
};
And if the key and value are 4-byte aligned and stored sequentially, does it matter where the next pair is stored?
What about a node in a linked list?
Just trying to visualize this conceptually, and also to see whether the same memory-layout guidelines apply to open-addressing hashing (where the load factor stays under 1) vs. chained hashing (where the load factor doesn't matter).

It's highly implementation-specific. And by that I am not only referring to the compiler, CPU architecture and ABI, but also the implementation of the hash table.
Some hash tables use a struct that contains a key and a value next to each other, much like you have guessed. Others have one array of keys and one array of values, so that values[i] is the associated value for the key at keys[i]. This is independent of the "open addressing vs. separate chaining" question.
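As a rough C++ sketch of those two layouts (purely illustrative, not the layout of any particular library):
#include <string>
#include <vector>

// Layout 1: each slot holds its key and value next to each other.
struct Entry
{
    int         key;
    std::string value;
};
std::vector<Entry> entries;

// Layout 2: parallel arrays, values[i] is the value belonging to keys[i].
std::vector<int>         keys;
std::vector<std::string> values;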

A hash table is a data structure in its own right. For visualizing it:
http://en.wikipedia.org/wiki/Hash_table
http://en.wikipedia.org/wiki/Hash_function
Using a hash function (implementation-specific), the keys are turned into positions in an array, and the values are placed at those positions.
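A minimal sketch of that "key to place" step, assuming a toy modulo hash (real hash functions and collision handling are more involved):
#include <cstddef>
#include <string>
#include <vector>

// Toy hash: reduce the key to an index into the backing array.
std::size_t bucketFor(int key, std::size_t tableSize)
{
    return static_cast<std::size_t>(key) % tableSize;
}

int main()
{
    std::vector<std::string> table(8);              // the array backing the table
    table[bucketFor(42, table.size())] = "answer";  // value placed at the computed index
}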
Linked lists I'm not as sure about, but I would expect nodes to end up roughly sequential in memory if they are created sequentially; each node is allocated separately, though, so there is no guarantee. Obviously, if what a node holds grows in size, the node would need to be moved and the pointers to it updated.

Usually, when the value is small (an int, say), it's best to store it together with the key (which itself shouldn't be too big); otherwise only a pointer to the value is kept next to the key.
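A small sketch of that trade-off (hypothetical entry layouts, in C++):
// Small value: keep it inline, right next to the key.
struct SmallEntry
{
    int key;
    int value;
};

// Large value: keep only a pointer next to the key; the payload lives elsewhere.
struct BigPayload
{
    char data[4096];
};
struct LargeEntry
{
    int         key;
    BigPayload* value;
};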

The simplest representation of a hash table is an array (the table).
A hash function maps each key to a number between 0 and the size of the array minus one. That number is the index for the item.
There is more to it than this, but that's the general concept, and it explains why lookups are so fast.
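To tie this back to the open-addressing part of the question, a minimal linear-probing sketch in C++ (illustrative only; a real implementation would track the load factor and grow the table):
#include <cstddef>
#include <string>
#include <vector>

struct Slot
{
    bool        used = false;
    int         key  = 0;
    std::string value;
};

// Start at the hashed index and walk forward until a free (or matching) slot
// is found. Assumes the table never becomes completely full.
void put(std::vector<Slot>& table, int key, const std::string& value)
{
    std::size_t i = static_cast<std::size_t>(key) % table.size();
    while (table[i].used && table[i].key != key)
        i = (i + 1) % table.size();     // collision: probe the next slot
    table[i].used  = true;
    table[i].key   = key;
    table[i].value = value;
}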

Related

Lua table: if the keys start from 1000, will there be a performance loss?

a={}
a[1000]=1
I don't define anything else. Will storage space be occupied for keys 1 to 999?
I know that a[1] = nil and a[999] = nil; during traversal, will the keys be walked from 1 to 1000 in sequence?
No, there is not going to be space allocated for other (1-999) elements (unless you create 1000 elements and then delete 1-999). Lua supports "sparse" arrays and will use the hash part of the table to store those key/value pairs.
If you are asking whether a[1000] is going to be slower if 1-999 elements are not present, then it's possible, as in this case the hash part of the table is going to be used (instead of the array part), but you'll have to benchmark to see if there is any observable difference that matters in your case.

Swift 3 and Index of a custom linked list collection type

In Swift 3, Collection indices have to conform to Comparable instead of Equatable.
The full story can be read here: swift-evolution/0065.
Here's a relevant quote:
Usually an index can be represented with one or two Ints that
efficiently encode the path to the element from the root of a data
structure. Since one is free to choose the encoding of the “path”, we
think it is possible to choose it in such a way that indices are
cheaply comparable. That has been the case for all of the indices
required to implement the standard library, and a few others we
investigated while researching this change.
In my implementation of a custom linked list collection a node (pointing to a successor) is the opaque index type. However, given two instances, it is not possible to tell if one precedes another without risking traversal of a significant part of the chain.
I'm curious, how would you implement Comparable for a linked list index with O(1) complexity?
The only idea that I currently have is to somehow count steps while advancing the index, storing it within the index type as a property and then comparing those values.
A serious downside of this solution is that indices must be invalidated when the collection is mutated. And while that seems reasonable for arrays, I do not want to break the huge benefit linked lists have: they do not invalidate indices of unchanged nodes.
EDIT:
It can be done at the cost of two additional integer collection properties, assuming the singly linked list implements front insert, front remove and back append. Any meddling around in the middle would break the O(1) complexity requirement anyway.
Here's my take on it.
a) I introduced one private integer type property to my custom Index type: depth.
b) I introduced two private integer type properties to the collection: startDepth and endDepth, which both default to zero for an empty list.
Each front insert decrements the startDepth.
Each front remove increments the startDepth.
Each back append increments the endDepth.
Thus all indices in startIndex..<endIndex have a corresponding integer range startDepth..<endDepth.
c) Whenever the collection vends an index, via either startIndex or endIndex, it will inherit its corresponding depth value from the collection. When the collection is asked to advance an index by invoking index(after:), I simply initialize a new Index instance with an incremented depth value (depth += 1).
Conforming to Comparable boils down to comparing left-hand side depth value to the right-hand side one.
Note that because I expand the integer range from both sides as well, all the depth values for the middle indices remain unchanged (thus are not invalidated).
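A small sketch of the idea, written here in C++ rather than Swift purely for illustration (names are placeholders, not standard-library types):
struct Node
{
    int   value;
    Node* next;
};

// The index pairs the node with a depth counter; comparing two indices
// compares only the depths, so it never walks the chain.
struct Index
{
    Node* node;
    long  depth;
};

bool operator<(const Index& a, const Index& b)  { return a.depth < b.depth; }
bool operator==(const Index& a, const Index& b) { return a.depth == b.depth; }

// Advancing an index moves to the next node and increments the depth.
Index indexAfter(const Index& i) { return { i.node->next, i.depth + 1 }; }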
Conclusion:
I gained O(1) index comparisons at the cost of a minor increase in memory footprint and a few integer increments and decrements. I expect index lifetimes to be short and the number of collections to be relatively small.
If anyone has a better solution I'd gladly take a look at it!
I may have another solution. If you use floats instead of integers, you can get roughly O(1) insertion-in-the-middle performance by setting the sortIndex of the inserted node to a value between the predecessor's and the successor's sortIndex. This would require storing (and updating) the predecessor's sortIndex on your nodes (I imagine this should not be too hard, since it only changes on insertion or removal and can always be propagated 'up').
In your index(after:) method you need to query the successor node, but since you use your node as the index, that is straightforward.
One caveat is the finite precision of floating point: if on insertion the distance between the two sort indices becomes too small, you need to re-index at least part of the list. Since you said you only expect small scale, I would just walk the whole list and use each node's position for that.
This approach has all the benefits of your own, with the added benefit of good performance on insertion in the middle.
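For illustration, a minimal sketch of the float-key insertion, again in C++ (the re-indexing fallback for exhausted precision is omitted):
struct FNode
{
    double sortIndex;   // ordering key; comparing two indices compares these
    FNode* next;
};

// Give the new node the midpoint key between its neighbours, so comparisons
// stay consistent without renumbering the rest of the list.
void insertAfter(FNode* prev, FNode* node)
{
    double lo = prev->sortIndex;
    double hi = prev->next ? prev->next->sortIndex : lo + 2.0;
    node->sortIndex = (lo + hi) / 2.0;
    node->next = prev->next;
    prev->next = node;
}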

What is the fastest means to transfer a record in DCOM?

I want to transfer some records with the following structure between two Windows PCs using COM/DCOM. I would prefer to transfer an array, say 100 TARec records at a time, rather than each record individually. Currently I am doing this using IStrings. I am looking to improve it by sending the raw records, to save the time spent encoding/decoding the strings at both ends. Please share your experience.
type
  TARec = record
    A : TDateTime;
    B : WORD;
    C : Boolean;
    D : Double;
  end;
All the record's field type are OLE compatible. Many thanks in advance.
As Rudy suggests in the comments, if your data contains simple value types then a variant byte array can be a very efficient approach and quite simple to implement.
Since you have stated that your data already resides in an array, the basic approach would be:
Create a byte array of the required size to hold all your record data (use VarArrayCreate with type varByte)
Lock the array to obtain a pointer that is safe to use to reference the array contents in memory (VarArrayLock will lock and return a pointer to the array data)
Use CopyMemory to directly copy the data from your array of records to the byte array memory.
Unlock the variant array (VarArrayUnlock) and pass it through your COM/DCOM interface
On the other ('receiving') side you simply reverse the process:
Declare an array of records of the required size
Lock the variant byte array to obtain a pointer to the memory holding the bytes
Copy the byte array data into your record array
Unlock the byte array
This exact approach is something I have used very successfully in a very demanding COM/DCOM scenario (w.r.t efficiency/performance) in the past.
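The Delphi variant-array plumbing (VarArrayCreate, VarArrayLock and so on) aside, the core of the idea is a flat byte-for-byte copy of fixed-layout records. A rough sketch of that core in C++ terms (field types are my approximations of the Delphi ones):
#include <cstdint>
#include <cstring>
#include <vector>

#pragma pack(push, 1)            // packed, so sender and receiver agree on the layout
struct ARec
{
    double        a;             // TDateTime is a double under the hood
    std::uint16_t b;             // WORD
    std::uint8_t  c;             // Boolean
    double        d;
};
#pragma pack(pop)

// Sender: flatten the record array into a byte buffer for the interface call.
std::vector<std::uint8_t> toBytes(const std::vector<ARec>& recs)
{
    std::vector<std::uint8_t> bytes(recs.size() * sizeof(ARec));
    std::memcpy(bytes.data(), recs.data(), bytes.size());
    return bytes;
}

// Receiver: reverse the copy; assumes the identical packed layout on both sides.
std::vector<ARec> fromBytes(const std::vector<std::uint8_t>& bytes)
{
    std::vector<ARec> recs(bytes.size() / sizeof(ARec));
    std::memcpy(recs.data(), bytes.data(), bytes.size());
    return recs;
}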
Things to be careful of:
If your data ever changes to include more complex types such as strings or dynamic arrays then additional work will be required to correctly transport these through a byte array.
If your data structure ever changes then the code on both sides of the interface will need to be updated accordingly. One way to protect against this is to incorporate some mechanism for the data to be identified as valid or not by the receiver. This could include a "version number" for example and/or a value (in a 'header' as part of the byte array, in addition to the array data, or passed as a separate parameter entirely - precise details don't really matter). If the receiver finds a version number or size that it is not expecting then it can report this gracefully rather than naively processing the data incorrectly and (most likely) crashing or throwing exceptions as a result.
Alignment/packing issues. Even with the same declaration for the record type, if code is compiled with different alignment settings then the size required for each record in memory could change (which is why a "version number" for the data structure format might not be reliable on its own). One way to avoid this would be to declare the record as packed, though this comes at the cost of a slight reduction in efficiency (and still relies on both sides of the interface agreeing that the data structure is packed).
These are just things to bear in mind, however, not prescriptions. How complex and robust your implementation needs to be will be determined by your specific case.

How can I avoid running out of memory with a growing TDictionary?

TDictionary<TKey,TValue> uses an internal array that is doubled if it is full:
newCap := Length(FItems) * 2;
if newCap = 0 then
  newCap := 4;
Rehash(newCap);
This performs well for a moderate number of items, but near the upper limit it is very unfortunate, because the doubling might throw an EOutOfMemory exception even though almost half of the memory is still available.
Is there any way to influence this behaviour? How do other collection classes deal with this scenario?
You need to understand how a dictionary works. A dictionary contains a list of "hash buckets" in which the items you insert are placed. That's a finite number, so once you fill them up you need to allocate more buckets; there's no way around it. Since the assignment of items to buckets is based on the result of a hash function, you can't simply add buckets to the end of the array and put stuff in there; you need to re-allocate the whole list of buckets, re-hash everything and place it in the (new) corresponding buckets.
Given this behaviour, the only way to make the dictionary not re-allocate once full is to make sure it never gets full. If you know the number of items you'll insert into the dictionary, pass it as a parameter to the constructor and you'll be done: no more dictionary reallocations.
If you can't do that (you don't know the number of items you'll have in the dictionary), you'll need to reconsider what made you select TDictionary in the first place and pick a data structure that offers a better compromise for your particular algorithm. For example, you could use binary search trees, since they balance by rotating information in existing nodes, with no need for re-allocations ever.
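The same pre-sizing idea, shown in C++ for comparison (std::unordered_map lets you reserve buckets up front, much like passing a capacity to TDictionary's constructor):
#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<int, std::string> m;
    m.reserve(1000000);        // allocate enough buckets up front: no rehashing while filling
    for (int i = 0; i < 1000000; ++i)
        m.emplace(i, "value");
}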

Understanding memory allocation for TList<RecordType>

I have to store a TList of something that can easily be implemented as a record in Delphi (five simple fields). However, it's not clear to me what happens when I do TList<TMyRecordType>.Add(R).
Since R is a local variable in the procedure in which I create my TList, I assume that the memory for it will be released when the function returns. Does this leave an invalid record pointer in the list? Or does the list know to copy-on-assign? If the former, I assume I would have to manually manage the memory for R with New() and Dispose(); is that correct?
Alternatively, I can "promote" my record type to a class type by simply declaring the fields public (without even bothering with making them formal properties). Is that considered OK, or ought I to take the time to build out the class with private fields and public properties?
Simplified: records are blobs of data and are passed around by value - i.e. by copying them - by default. TList<T> stores values in an array of type T. So, TList<TMyRecordType>.Add(R) will copy the value R into the array at position Count, and increment the Count by one. No need to worry about allocation or deallocation of memory.
More complex issues that you usually don't need to worry about: if your record contains fields of a string type, an interface type, a dynamic array, or a record that itself contains fields of one of these types, then it's not just a simple copy of data; instead, CopyRecord from System.pas is used, which ensures that reference counts are updated correctly. But usually you don't need to worry about this detail unless you are using Move to shift the bits around yourself, or doing similar low-level operations.
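An analogous picture in C++ terms (a plain value type copied into a dynamic array; the reference-count bookkeeping CopyRecord does for managed fields has no counterpart here):
#include <vector>

struct MyRecord
{
    int a, b, c, d, e;           // five simple fields, as in the question
};

int main()
{
    std::vector<MyRecord> list;
    MyRecord r{1, 2, 3, 4, 5};   // local variable
    list.push_back(r);           // the value is copied into the vector's own array
    r.a = 99;                    // does not affect the stored copy
}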
