Assume we have a very big NSDictionary. When we call the objectForKey: method, will it perform lots of operations internally to find the value, or will it point to the value in memory directly?
How does it work internally?
The CFDictionary section of the Collections Programming Topics for Core Foundation (which you should look into if you want to know more) states:
A dictionary—an object of the CFDictionary type—is a hashing-based collection whose keys for accessing its values are arbitrary, program-defined pieces of data (or pointers to data). Although the key is usually a string (or, in Core Foundation, a CFString object), it can be anything that can fit into the size of a pointer—an integer, a reference to a Core Foundation object, even a pointer to a data structure (unlikely as that might be).
This is what Wikipedia has to say about hash tables:
Ideally, the hash function should map each possible key to a unique slot index, but this ideal is rarely achievable in practice (unless the hash keys are fixed; i.e. new entries are never added to the table after it is created). Instead, most hash table designs assume that hash collisions—different keys that map to the same hash value—will occur and must be accommodated in some way. In a well-dimensioned hash table, the average cost (number of instructions) for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key-value pairs, at constant average (indeed, amortized) cost per operation.
The performance therefore depends on the quality of the hash function. If it is good, then accessing an element should be an O(1) operation (i.e. not dependent on the number of elements).
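To make the idea concrete, here is a minimal sketch of the bucket-index step in Swift. This is a toy illustration of the general hashing technique, not Apple's actual implementation; the key and bucket count are arbitrary:

```swift
// Toy illustration of the core idea (not Apple's actual implementation):
// hash the key, then reduce the hash to a bucket index.
let key = "someKey"
let bucketCount = 64  // power of two, so masking works as modulo

// Swift's Hashable protocol gives every String a hash value.
let bucket = key.hashValue & (bucketCount - 1)

// A lookup only inspects that one bucket (plus any colliding entries
// stored there), so the cost does not grow with the number of keys.
print("key \"\(key)\" lives in bucket \(bucket)")
```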
EDIT:
In fact, after reading further in the Collections Programming Topics for Core Foundation, Apple gives an answer to your question:
The access time for a value in a CFDictionary object is guaranteed to be at worst O(log N) for any implementation, but is often O(1) (constant time). Insertion or deletion operations are typically in constant time as well, but are O(N*log N) in the worst cases. It is faster to access values through a key than accessing them directly. Dictionaries tend to use significantly more memory than an array with the same number of values.
NSDictionary is essentially a hash table structure, so the Big-O for lookup is O(1). However, to avoid reallocations (and to actually achieve that O(1) complexity) you should use dictionaryWithCapacity: to create a new dictionary with an appropriate size for your dataset.
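For illustration, here is the same idea in Swift, where Dictionary(minimumCapacity:) plays the role of dictionaryWithCapacity:. The dataset size below is a hypothetical example:

```swift
// A sketch of the capacity idea in Swift; expectedCount is a
// hypothetical dataset size. Dictionary(minimumCapacity:) reserves
// enough buckets up front so the table isn't rehashed repeatedly
// while it is being filled.
let expectedCount = 100_000

var lookup = Dictionary<String, Int>(minimumCapacity: expectedCount)
for i in 0..<expectedCount {
    lookup["key\(i)"] = i
}

// Average lookup cost stays O(1) regardless of the element count.
let value = lookup["key99999"]  // Optional(99999)
```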
Related
Regarding iOS Swift: which is heavier / more expensive to initialize, an Array or a Dictionary?
This small detail matters when you're dealing with large data sets, and in very large corporations that may have millions of items in an array, dictionary, or some other data structure based on them.
TEST RESULTS: the tests measured how much time it took to initialize 1,000,000 empty Arrays and Dictionaries, and I decided to throw Set in there too.
ANSWER: Arrays are lighter than Set and Dictionary.
Below is the remainder of the description from when I originally wrote this question:
In Java, a HashMap is built on top of an array.
Apple says a Dictionary is "a type" of hash table and that "similar data types are known as hashes or associative arrays."
So Apple does not flatly say a dictionary is a hash map / hash table or an associative array; it is a type of those and similar. "A type of" doesn't mean it's some revolutionary new standard that is completely different from the other similar types, but Apple is clear that they are not identical. It may differ in how the hash is calculated, how elements that collide at the same array index are stored, and so on.
https://developer.apple.com/documentation/swift/dictionary
I put everything in a measure block.
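For reference, a measure block of the kind described might look like the following minimal sketch, assuming XCTest; the class and test names are illustrative, and in real measurements the optimizer may elide unused empty collections, so results need care:

```swift
import XCTest

// A sketch of the kind of XCTest measure block described above:
// time how long it takes to create 1,000,000 empty instances of
// each collection type.
final class CollectionInitTests: XCTestCase {

    func testArrayInit() {
        measure {
            for _ in 0..<1_000_000 { _ = [Int]() }
        }
    }

    func testDictionaryInit() {
        measure {
            for _ in 0..<1_000_000 { _ = [Int: Int]() }
        }
    }

    func testSetInit() {
        measure {
            for _ in 0..<1_000_000 { _ = Set<Int>() }
        }
    }
}
```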
I have a matrix in C of size m x n; the size isn't known in advance. I need operations on the matrix such as: delete the first element and find the i-th element (the size wouldn't be too big, from 10 to 50 columns). Which is more efficient to use, a linked list or a hash table? And how can I map a column of the matrix to one element of the linked list or hash table, depending on which I choose?
Thanks
Linked lists don't provide good random access, so from that perspective you might not want to use them to represent a matrix, since your lookup time will take a hit for each element you attempt to find.
Hashtables are very good for looking up elements, as they can provide near constant-time lookup for any given key, assuming the hash function is decent (using a well-established hashtable implementation would be wise).
Given the constraints you've stated, though, a hashtable of linked lists might be a suitable solution, but it would still present you with the problem of finding the i-th element: you'd still need to iterate through each linked list to find the element you want. This would give you O(1) lookup for the row, but O(n) for the column, where n is the column count.
Furthermore, this is difficult because you'd have to make sure every list in your hashtable is updated with the appropriate number of nodes as the number of columns grows or shrinks, so you're not buying yourself much in terms of space complexity.
A 2D array is probably best suited for representing a matrix, where you provide some capability of allowing the matrix to grow by efficiently managing memory allocation and copying.
An alternative would be to look at something like std::vector in lieu of the linked list; it acts like an array in that it's contiguous in memory, but gives you the flexibility of growing dynamically in size.
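To illustrate the contiguous-storage approach (sketched here in Swift rather than C, purely for illustration), a matrix can be backed by a single flat array with element (row, col) mapped to index row * columns + col:

```swift
// A minimal sketch of a matrix backed by a single contiguous array.
// Element (row, col) lives at index row * columns + col, so any
// element is reachable in O(1) without chasing list nodes.
struct Matrix {
    private(set) var rows: Int
    let columns: Int
    private var storage: [Double]

    init(rows: Int, columns: Int) {
        self.rows = rows
        self.columns = columns
        self.storage = Array(repeating: 0.0, count: rows * columns)
    }

    subscript(row: Int, col: Int) -> Double {
        get { storage[row * columns + col] }
        set { storage[row * columns + col] = newValue }
    }

    // "Delete the first element" interpreted here as dropping the first
    // row; this is O(n) because the remaining elements shift down.
    mutating func removeFirstRow() {
        storage.removeFirst(columns)
        rows -= 1
    }
}
```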
If it's lookup you care about, use a hash table; the average runtime would be O(1).
For deletion/get/set at given indices in O(1), a 2D array would be optimal.
I've been doing some reading about hash tables, dictionaries, etc. All the literature and videos I have read and watched describe hash tables as having a space/time trade-off property.
I am struggling to understand why a hash table takes up more space than, say, an array or a list with the same total number of elements (values). Does it have something to do with actually storing the hashed keys?
As far as I understand, and in basic terms, a hash table takes a key identifier (say, some string) and passes it through a hashing function, which spits out an index into an array or some other data structure. Apart from the obvious memory used to store your objects (values) in the array or table, why does a hash table use up more space? I feel like I am missing something obvious...
Like you say, it's all about the trade-off between lookup time and space. The larger the number of spaces (buckets) the underlying data structure has, the greater the number of locations the hash function has where it can potentially store each item, and so the chance of a collision (and therefore worse than constant-time performance) is reduced. However, having more buckets obviously means more space is required. The ratio of number of items to number of buckets is known as the load factor, and is explained in more detail in this question: What is the significance of load factor in HashMap?
In the case of a minimal perfect hash function, you can achieve O(1) performance storing n items in n buckets (a load factor of 1).
As you mentioned, the underlying structure of a hash table is an array, which is the most basic type in the data-structure world.
In order to make a hash table fast, supporting O(1) operations, the underlying array's capacity must be more than enough. The term load factor is used to evaluate this: the load factor is the ratio of the number of elements in the hash table to the number of all the cells in it. It measures how full (or empty) the hash table is.
To make the hash table run fast, the load factor can't be greater than some threshold value. For example, in the quadratic probing collision-resolution method, the load factor should not be greater than 0.5. When the load factor approaches 0.5 while inserting new elements, we need to rehash the table to keep meeting that requirement.
So a hash table's high run-time performance is bought with extra space usage. This is the time and space tradeoff.
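To make the load-factor mechanics concrete, here is a minimal toy chained hash table in Swift. The 0.5 threshold and power-of-two bucket counts are illustrative choices, and for brevity insert doesn't deduplicate existing keys:

```swift
// Toy chained hash table illustrating the load-factor tradeoff:
// when count / buckets.count crosses the threshold, we allocate a
// bigger bucket array and rehash, trading memory for speed.
struct ToyHashTable {
    private var buckets: [[(key: String, value: Int)]]
    private var count = 0
    private let maxLoadFactor = 0.5  // illustrative threshold

    init(capacity: Int = 8) {
        buckets = Array(repeating: [], count: capacity)
    }

    private func index(for key: String, in bucketCount: Int) -> Int {
        // Masking works because bucket counts stay powers of two.
        key.hashValue & (bucketCount - 1)
    }

    mutating func insert(_ value: Int, forKey key: String) {
        if Double(count + 1) / Double(buckets.count) > maxLoadFactor {
            resize(to: buckets.count * 2)  // more space, fewer collisions
        }
        buckets[index(for: key, in: buckets.count)].append((key, value))
        count += 1
    }

    func lookup(_ key: String) -> Int? {
        buckets[index(for: key, in: buckets.count)]
            .first(where: { $0.key == key })?.value
    }

    private mutating func resize(to newCount: Int) {
        var newBuckets: [[(key: String, value: Int)]] =
            Array(repeating: [], count: newCount)
        for bucket in buckets {
            for entry in bucket {
                newBuckets[index(for: entry.key, in: newCount)].append(entry)
            }
        }
        buckets = newBuckets
    }
}
```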
Having returned to development after an absence of over a decade, I am getting myself up to speed with the latest technologies for web development. Reading this post, I see that I already understood the difference between hashes and arrays.
However, doesn't this mean that arrays are just a type of hash that uses a numerical key? As there is no reason to believe that an implementation of an array will automatically maintain the sequential nature of its indices (when you delete or insert items, for example), is there any greater difference than the inherent ordering of an array?
I mean, to step through an array you need to set up a loop through the indices, just as you would loop through the keys of a hash, and you could order the numerical hash keys to behave the same way (i.e. access the items from 1 to the last number that is a key, in numerical sequence). To access an array element, you use the index of the value you want, just as you would give the numerical key to the hash.
I came to this question while learning about arrays and hashes in Ruby on Rails, but it is a general question.
A Hash is essentially an array. Hash keys have some type of conversion function to translate an object of another type (or an integer set of other values) into the integer index of an array. Conversely, an array is a Hash that does not translate the keys into a separate type or value of index. However, calling an array a Hash implies an extra layer of functionality that does not exist, since there is no key conversion.
By definition, the objects of an array are stored in consecutive locations in memory, accessible by index.
Even when either type of data structure could be used, one benefit of using hashes with integer keys is that a large spread of integers can be stored in a small number of buckets. E.g., if your numeric keys are 5 integers near 1, 10, 100, 1000 and 10000, you don't need 10K buckets to hold a hash of these 5 elements, but you do need that many slots in a straight-up array. Hash functions tend to be recalculated, and more memory reallocated, as the hash grows; the benefit of using an array is that its size is more easily controlled and can remain fixed.
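A quick Swift illustration of that point (with Dictionary standing in for a Hash):

```swift
// Five widely spread integer keys cost five dictionary entries,
// while an array indexed by the same keys needs ~10,001 slots.
let sparse: [Int: String] = [
    1: "a", 10: "b", 100: "c", 1000: "d", 10000: "e"
]
print(sparse.count)          // 5 entries, small footprint

var dense = [String?](repeating: nil, count: 10_001)
dense[1] = "a"; dense[10] = "b"; dense[100] = "c"
dense[1000] = "d"; dense[10000] = "e"
print(dense.count)           // 10,001 slots for the same five values
```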
Here is how declarative programming defines it.
Difference between Declarative and Procedural Programming?
http://en.wikipedia.org/wiki/Procedural_programming
http://en.wikipedia.org/wiki/Declarative_programming
There are primitive, composite, and abstract data structures.
- An Array is composite.
- A Hash is abstract.
We have both because they are fundamentally different.
For example, you can't push/pop primitives into and out of a Hash like you can with an Array, because a Hash uses unordered key-value pairs while an Array has an index.
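In Swift terms (with Dictionary playing the role of a Hash), the difference looks like this:

```swift
// Arrays have ordered, index-based push/pop; dictionaries are
// unordered key-value lookups with no notion of position.
var stack = [Int]()
stack.append(1)              // "push"
stack.append(2)
let top = stack.popLast()    // "pop" — order is meaningful

var hash = [String: Int]()
hash["one"] = 1              // no position, only a key
hash["two"] = 2
// There is no popLast() on Dictionary; iteration order is unspecified.
```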
http://en.m.wikipedia.org/wiki/List_of_data_structures
Is the best way to sort an array in Delphi "alphanumeric"?
I found this comment in some old code of my application:
"The elements of this array must be in ascending, alphanumeric sort order."
If so, what could be the reason?
-Vas
There's no "best" way as to how to sort the elements of an array (or any collection for that fact). Sort is a humanized characteristic (things are not usually sorted) so I'm guessing the comment has more to do with what your program is expecting.
More concretely, there's probably another section of code elsewhere that expects the array elements to be sorted alphanumerically. It could be something as simple as displaying the array in a TreeView already ordered, so that the calling code doesn't have to sort it first.
Arrays are laid out as a single contiguous memory allocation so that access is fast. Internally the compiler just calls GetMem, asking for SizeOf(Type) * array size. Nothing about the order of the elements affects the performance or memory size of arrays in general, so the reason must be in the program logic.
Most often an array is sorted to provide faster search times. Given a list of length L, I can compare with the midpoint (L div 2), quickly determine whether I need to look at the greater half or the lesser half, and recursively continue this pattern until I either have nothing left to divide or have found my match. This is called a binary search. If the list is not sorted, this type of operation is not available, and I must instead inspect every item in the list until I reach the end.
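For illustration, here is that algorithm sketched in Swift (used here for consistency with the rest of this page, rather than Delphi):

```swift
// The binary search described above: repeatedly compare against the
// midpoint and discard half the list. Requires ascending sort order.
func binarySearch<T: Comparable>(_ items: [T], for target: T) -> Int? {
    var low = 0
    var high = items.count - 1
    while low <= high {
        let mid = (low + high) / 2          // L div 2 in Delphi terms
        if items[mid] == target {
            return mid
        } else if items[mid] < target {
            low = mid + 1                    // look in the greater half
        } else {
            high = mid - 1                   // look in the lesser half
        }
    }
    return nil                               // not found
}

let sorted = ["apple", "banana", "cherry", "date"]
let found = binarySearch(sorted, for: "cherry")
print(found as Any)                          // Optional(2)
```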
No, there is no "best way" of sorting. And that's one of the reasons why you have multiple sorting techniques out there.
With QuickSort, you even provide the comparison function where you determine what order you ultimately want.
Sorting an array in some way is useful when you're trying to do a binary search on it. A binary search can be extremely fast compared to other methods. But if the sort order is wrong, the search will be unable to find the record.
Other reasons to keep arrays sorted are almost always for cosmetic reasons, to decide how the array is sent to some output.
The best way to reorder an array depends on the length of the array and the type of data it contains. A QuickSort algorithm gives a fast result in most cases; Delphi uses it internally when you're working with string lists and some other lists. The question is: do you really need to sort it? And does it even really need to stay an array?
But the best way to keep an array sorted is to keep it sorted from the very first element you add! In general, I write a wrapper around my array types which takes care of keeping the array ordered. The 'Add' method searches for the biggest value in the array that's less than or equal to the value I want to add, then inserts the new item right after that position. To me, that is the best solution. (With big arrays you could again use the binary search method to find the location where you need to insert the new record. It's slower than appending records to the end, but you never have to wonder whether the array is sorted, since it always is.)
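A minimal Swift sketch of that keep-it-sorted wrapper (the SortedArray name and details are illustrative, not a standard type):

```swift
// A sketch of the "keep it sorted from the first element" wrapper:
// find the insertion point with a binary search, then insert there,
// so the array is sorted at all times.
struct SortedArray<T: Comparable> {
    private(set) var items: [T] = []

    mutating func add(_ value: T) {
        var low = 0
        var high = items.count
        // Binary search for the first index whose element exceeds value,
        // i.e. the slot right after the biggest element <= value.
        while low < high {
            let mid = (low + high) / 2
            if items[mid] <= value {
                low = mid + 1
            } else {
                high = mid
            }
        }
        items.insert(value, at: low)  // O(n) shift, but always sorted
    }
}

var list = SortedArray<Int>()
for n in [42, 7, 19, 3] { list.add(n) }
print(list.items)  // [3, 7, 19, 42]
```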