I know that, for a list, we have to traverse the entire list to determine its size. What is the complexity of determining the size of a binary in Erlang?
byte_size/1 (the BIF that measures the size of a binary) executes in constant time, regardless of the size of the binary, while the length of a list is proportional to the length of the list.
See "3 Common Caveats" in the Erlang Efficiency Guide for reference.
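As a rough illustration (a shell-session sketch; the sizes are arbitrary): byte_size/1 just reads the size stored with the binary, while length/1 has to walk every cons cell of the list.

1> Bin = binary:copy(<<0>>, 10000000).   % a 10 MB binary
2> byte_size(Bin).                       % constant time: the size is stored with the binary
10000000
3> List = lists:seq(1, 10000000).
4> length(List).                         % proportional time: walks all ten million cells
10000000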
The time and memory complexity of erlang:size/1, erlang:tuple_size/1, erlang:bit_size/1 and erlang:byte_size/1 is O(1), and of erlang:map_size/1 as well. Why would you even think it could be anything else? It wouldn't make any sense.
I have just started learning rust, and it is my first proper look into a low level language (I do python, usually). Part of the tutorial explains that a string literal is stored on the stack, because it has a fixed (and known) size; it also explains that a non-initialised string is stored on the heap, so that its size can grow as necessary.
My understanding is that the stack is much faster than the heap. In the case of a string whose size is unknown, but I know it will not ever require more than n bytes, does it make sense to allocate space on the stack for the maximum size, instead of sticking it on the heap?
The purpose of this question is not to solve a problem, but to help me understand, so I would appreciate verbose and detailed answers!
The difference in performance between the stack and the heap comes from the fact that objects on the heap may change size at run time, and they must then be reallocated somewhere else in the heap.
Now for the verbose part. Imagine you have an i32 integer. This number will always be the same size, so any modification made to it will occur in place. When it goes out of scope (it stops being needed in the program), it is either discarded on its own or, more efficiently, discarded along with the whole stack frame it belongs to.
Now you want to create a String. So you create it on the heap and give it a value. Then you modify it and add some characters to it. Now one of two things can happen:

1) There is free memory after the string, so the allocator uses this memory to write the new part.

2) There is already another object allocated in the memory right after the string and, of course, you don't want to overwrite it. So the allocator looks for the next free region of memory large enough to hold the new string and copies the string into it, then frees the old one.
As you can see, the number of operations needed on the heap is far higher than on the stack, so its performance will be lower.
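Here is a small sketch of that second case (nothing beyond the standard library; the exact growth pattern is an implementation detail): every time the capacity jumps, the allocator had to move the string to a larger spot on the heap.

fn main() {
    let mut s = String::new();
    let mut last_cap = s.capacity();
    for i in 0..100 {
        s.push('a');
        if s.capacity() != last_cap {
            // the capacity changed, so the bytes were copied to a new allocation
            println!("reallocated at len {}: {} -> {}", i + 1, last_cap, s.capacity());
            last_cap = s.capacity();
        }
    }
}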
Now, in your case, there are methods specifically for reserving memory: String::reserve() and String::reserve_exact(). I would always recommend looking at the Rust documentation; usually there is already a std method for what you want.
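For example (a minimal sketch; 64 is an arbitrary stand-in for the maximum size you know the string will never exceed): reserving up front means the later pushes never trigger a reallocation.

fn main() {
    let mut s = String::new();
    s.reserve(64);                 // ask for at least 64 bytes of capacity up front
    let cap_before = s.capacity();

    for _ in 0..64 {
        s.push('x');               // len never exceeds capacity, so no reallocation
    }
    assert_eq!(s.capacity(), cap_before);
    println!("len = {}, capacity = {}", s.len(), s.capacity());
}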
I am trying to build a Sieve of Eratosthenes in Lua. I tried several things, but I see myself confronted with the following problem:

Lua's tables are too small for this scenario. If I just want to create a table with all the numbers (see example below), the table is too "small" even with only 1/8 (...) of the number (the number is pretty big, I admit)...
max = 600851475143
numbers = {}
for i=1, max do
table.insert(numbers, i)
end
If I execute this script on my Windows machine, there is an error message saying: C:\Program Files (x86)\Lua\5.1\lua.exe: not enough memory. I tried it with Lua 5.3 running on my Linux machine too, and the process was simply killed. So it is pretty obvious that Lua can't handle that number of entries.

I don't really know whether it is just impossible to store that number of entries in a Lua table or whether there is a simple solution for this (I tried it using a long string as well)? And what exactly is the largest number of entries a Lua table can hold?

Update: Would it be possible to somehow manually allocate more memory for the table?
Update 2 (answer to the second question): The second question is an easy one; I just tested it by increasing the count until the program broke: 33,554,432 (2^25) entries fit in one one-dimensional table on my 12 GB RAM system. Why 2^25? 64 bits per number * 2^25 entries = 2,147,483,648 bits, i.e. 256 MB of raw number data; with Lua's per-entry overhead and the temporary copies made while the table grows, that is presumably where the roughly 2 GB of address space available to the 32-bit Lua for Windows build runs out.
P.S. You may have noticed that this number is from Project Euler Problem 3. Yes, I am trying to solve that. Please don't give specific hints (..). Thank you :)
The Sieve of Eratosthenes only requires one bit per number, representing whether the number has been marked non-prime or not.
One way to reduce memory usage would be to use bitwise math to represent multiple bits in each table entry. Current Lua implementations have intrinsic support for bitwise-or, -and etc. Depending on the underlying implementation, you should be able to represent 32 or 64 bits (number flags) per table entry.
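A minimal sketch of that idea, assuming Lua 5.3+ for the native bitwise and floor-division operators (on 5.1/5.2 you would go through the bit or bit32 libraries instead):

-- pack 64 "is composite" flags into each table slot
local function new_bitset(n)
  local words = {}
  for w = 0, n // 64 do words[w] = 0 end
  return words
end

local function set_bit(words, i)        -- mark number i as composite
  local w, b = i // 64, i % 64
  words[w] = words[w] | (1 << b)
end

local function test_bit(words, i)       -- has number i been marked?
  local w, b = i // 64, i % 64
  return (words[w] >> b) & 1 == 1
end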
Another option would be to use one or more very long strings instead of a table. You only need a linear array, which is really what a string is. Just have a long string with "t" or "f", or "0" or "1", at every position.
Caveat: strings in Lua are immutable, so every modification creates a new copy, which rapidly turns into O(n²) or worse behaviour if you keep rewriting one huge string. You wouldn't want one continuous string for the whole massive sequence, but you could break it up into blocks of a thousand, or of some power of 2. That would reduce your memory usage to 1 byte per number while keeping the copying overhead manageable.
Edit: After noticing a point made elsewhere, I realized your maximum number is so large that, even with a bit per number, your memory requirements would optimally be about 73 gigabytes, which is extremely impractical. I would recommend following the advice Piglet gave in their answer, to look at Jon Sorenson's version of the sieve, which works on segments of the space instead of the whole thing.
I'll leave my suggestion, as it still might be useful for Sorenson's sieve, but yeah, you have a bigger problem than you realize.
Lua uses double-precision floats to represent numbers. That's 64 bits (8 bytes) per number.

600851475143 numbers therefore come to almost 4.5 terabytes of memory.
So it's not Lua's or its tables' fault. The error message even says
not enough memory
You just don't have enough RAM to allocate that much.
If you had read the linked Wikipedia article carefully, you would have found the following section:
As Sorenson notes, the problem with the sieve of Eratosthenes is not the number of operations it performs but rather its memory requirements.[8] For large n, the range of primes may not fit in memory; worse, even for moderate n, its cache use is highly suboptimal. The algorithm walks through the entire array A, exhibiting almost no locality of reference.

A solution to these problems is offered by segmented sieves, where only portions of the range are sieved at a time.[9] These have been known since the 1970s, and work as follows

...
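To make the idea concrete, here is a rough sketch of a segmented sieve in Lua (the limit and segment size at the bottom are arbitrary illustration values, not tuned for your problem): only the base primes up to sqrt(n) plus one segment of flags ever live in memory at once.

-- ordinary sieve up to a small limit; this part easily fits in memory
local function simple_sieve(limit)
  local is_comp, primes = {}, {}
  for i = 2, limit do
    if not is_comp[i] then
      primes[#primes + 1] = i
      for j = i * i, limit, i do is_comp[j] = true end
    end
  end
  return primes
end

-- sieve [2, n] one window of segment_size numbers at a time,
-- calling visit(p) for every prime found
local function segmented_sieve(n, segment_size, visit)
  local base_primes = simple_sieve(math.floor(math.sqrt(n)))
  local low = 2
  while low <= n do
    local high = math.min(low + segment_size - 1, n)
    local is_comp = {}                      -- only segment_size flags live at once
    for _, p in ipairs(base_primes) do
      -- first multiple of p that needs marking inside [low, high]
      local start = math.max(p * p, math.ceil(low / p) * p)
      for j = start, high, p do
        is_comp[j - low] = true
      end
    end
    for i = low, high do
      if not is_comp[i - low] then visit(i) end
    end
    low = high + 1
  end
end

-- example: print the primes up to 100 in windows of 30 numbers
segmented_sieve(100, 30, print)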
I presume that the technique of detecting shared expressions is applied in most modern SMT solvers, so performance should be very good when a solver processes a sequence of similar expressions. However, I got unexpected results after running Z3 on input1 and input2. Instead of building one long constraint A as in "input1", "input2" defines some intermediate variables that map to the sub-expressions of A. In that case input1 has fewer variables, so I expected it to be solved faster than input2. I cannot find anything useful in the statistics, as they are exactly the same except for the solving time and memory consumed:
I would very much appreciate it if someone could answer/explain what affects the performance of SMT solvers more: the number of variables or the number of subexpressions?

I've done some profiling, and it seems that both inputs behave exactly the same inside the solver. All (check-sat) commands take exactly the same time. Note that input2 is a file of size 255 KB, while input1 is a file of size 240 MB, i.e., input1 is about 1000 times larger than input2. According to my profiler, all of the additional time required to solve these queries is spent in the parser. It simply takes a long time to read and check the larger input; the actual queries are all easy.
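For anyone wondering what the two styles look like, here is a hypothetical miniature (not the actual inputs, which we don't have): the constraint is the same, input1-style repeats the shared subexpression textually, input2-style names it once, and the solver builds essentially the same internal term graph either way, so only the parsing cost differs.

; --- input1 style: the shared subexpression (+ x y) is written out each time ---
(declare-const x Int)
(declare-const y Int)
(assert (> (+ x y) 0))
(assert (< (+ x y) 10))
(check-sat)

; --- input2 style (a separate file): an intermediate constant names the subexpression ---
(declare-const x Int)
(declare-const y Int)
(declare-const t Int)
(assert (= t (+ x y)))
(assert (> t 0))
(assert (< t 10))
(check-sat)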
I was wondering about the parameters of two APIs of epoll.
epoll_create (int size) - in this API, size is defined as the size of the event pool. But it seems that having more events than the size still works. (I set the size to 2 and forced the event pool to have 3 events... but it still works!?) So I was wondering what this parameter actually means, and I am curious about the maximum value of this parameter.

epoll_wait (int maxevents) - for this API, the definition of maxevents is straightforward. However, I can find little information or advice on how to determine this parameter. I would expect it to change depending on the size of the epoll event pool. Any suggestions or advice would be great. Thank you!
1.
"man epoll_create"
DESCRIPTION
...
The size is not the maximum size of the backing store but just a hint to the kernel about how to dimension internal structures. (Nowadays, size is unused; see NOTES below.)

NOTES

Since Linux 2.6.8, the size argument is unused, but must be greater than zero. (The kernel dynamically sizes the required data structures without needing this initial hint.)
2.

Just determine a suitable number yourself, but be aware that giving it a small number may reduce efficiency a little, because the smaller the number assigned to maxevents, the more often you will have to call epoll_wait() to consume all the events already queued on the epoll instance.
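A minimal sketch in C (error handling mostly omitted; watching stdin is just a placeholder): the array handed to epoll_wait() is only the batch size for one call, not a limit on how many descriptors the epoll instance can track, so any reasonable value works and a too-small one simply means more wake-ups.

#include <sys/epoll.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_EVENTS 64   /* arbitrary per-call batch size chosen for this sketch */

int main(void)
{
    int epfd = epoll_create1(0);              /* no size hint needed nowadays */
    if (epfd == -1) { perror("epoll_create1"); return 1; }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = 0 };
    epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev);   /* watch stdin as an example */

    struct epoll_event events[MAX_EVENTS];
    int n = epoll_wait(epfd, events, MAX_EVENTS, 5000);   /* 5 s timeout */
    for (int i = 0; i < n; i++)
        printf("fd %d is ready\n", events[i].data.fd);

    close(epfd);
    return 0;
}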
There is a question I have been wondering about for ages and I was hoping someone could give me an answer to rest my mind.
Let's assume that I have an input stream (like a file/socket/pipe) and want to parse the incoming data. Let's assume that each block of incoming data is split by a newline, like most common internet protocols. This application could just as well be parsing html, xml or any other smart data structure. The point is that the data is split into logical blocks by a delimiter rather than a fixed length. How can I buffer the data to wait for the delimiter to appear?
The answer seems simple enough: just have a large enough byte/char array to fit the entire thing.
But what if the delimiter comes after the buffer is full? This is actually a question about how to fit a dynamic block of data in a fixed size block. I can only really think of a few alternatives:
Increase the buffer size when needed. This may require heavy memory reallocation, and may lead to resource exhaustion from specially crafted streams (or perhaps even denial of service in the case of sockets where we want to protect ourselves against exhaustion attacks and drop connections that try to exhaust resources...and an attacker starts sending fake, oversized, packets to trigger the protection).
Start overwriting old data by using a circular buffer. Perhaps not an ideal method since the logical block would become incomplete.
Dump new data when the buffer is full. However, this way the delimiter will never be found, so this choice is obviously not a good option.
Just make the fixed-size buffer damn large and assume all incoming logical data blocks are within its bounds... and if it ever fills up, just interpret the full buffer as a logical block...

In any case I feel we must assume that the logical blocks will never exceed a certain size...
Any thoughts on this topic? Obviously there must be a way since the higher level languages offer some sort of buffering mechanisms with their readLine() stream methods.
Is there any "best way" to solve this, or is there always a tradeoff? I really appreciate all thoughts and ideas on this topic, since this question has been haunting me every time I have needed to write a parser of some sort.
There are normally two techniques for this:

1) What I think readline uses - if the buffer fills, return the data without the delimiter on the end.

2) When the buffer fills, remember that it filled, keep reading until you get the delimiter, and report an error (or truncate the record at the buffer size).
Options (2) and (3) are out, as you are losing data in both cases. Option (4), a huge fixed-size buffer, would not solve the problem, as it is just not possible to know what size is large enough. Is it all the physical memory + swap space + the free space available on every disk everywhere in the known universe?

Resizing the buffer looks like the best solution. Say, realloc to twice the size and continue writing. There is always a chance of a specially constructed stream, like a DoS attack, trying to bring down the system. My first thought was to set an arbitrarily large size as the max_size for the buffer; however, if we could do that, we could just set that as the size of a large fixed buffer. So resizing the buffer looks like the best option to me.
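Here is a sketch of that in C (MAX_LINE and the starting capacity are arbitrary values for this sketch; the hard cap is there only to address the DoS concern above, and the delimiter is a newline): grow by doubling, and refuse a block once it passes the defined limit rather than letting a hostile peer push the allocation forever.

#include <stdio.h>
#include <stdlib.h>

#define MAX_LINE (64 * 1024)   /* arbitrary upper bound on one logical block */

/* Reads from fp until '\n' or EOF. Returns a malloc'd string the caller must
 * free, or NULL if the stream is exhausted, allocation fails, or the block
 * exceeds MAX_LINE. */
char *read_block_bounded(FILE *fp)
{
    size_t cap = 128, len = 0;
    char *buf = malloc(cap);
    if (!buf) return NULL;

    int c;
    while ((c = fgetc(fp)) != EOF && c != '\n') {
        if (len + 1 >= cap) {
            if (cap >= MAX_LINE) { free(buf); return NULL; }  /* refuse oversized block */
            cap *= 2;                                         /* double and keep going */
            char *tmp = realloc(buf, cap);
            if (!tmp) { free(buf); return NULL; }
            buf = tmp;
        }
        buf[len++] = (char)c;
    }
    if (len == 0 && c == EOF) { free(buf); return NULL; }     /* nothing left to read */
    buf[len] = '\0';
    return buf;
}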
If the protocol, or you, do not define an upper bound for the length of each block, then I don't see how you can prevent memory-exhaustion edge cases.

Assuming that there is an upper bound, using a fixed-size block seems like a good approach for reasonably sized limits.

If the limits are high enough that a single fixed buffer would be inefficient, then I would suggest using a data structure that is implemented internally as a linked list of fixed-size buffers.
Why do you have to wait to start processing?
Generally, alternative 4 is sound. It doesn't, however, require an "assumption", but rather a definition. You simply declare that blocks are smaller than 8K and are done with it. It's not difficult to do.
Further, there's alternative 5: Start processing partial buffers. This works unless you have designed a truly pathological protocol that sends critical data at the very end of the block.
HTML, XML, JSON/YAML, etc., can all be parsed incrementally. You don't require a delimiter to do useful processing.
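As a tiny illustration of that last point (the "processing" here is just counting records, purely for the sake of example): state carries across chunks, and no chunk ever needs to end on a delimiter.

#include <stdio.h>

struct stream_state {
    unsigned long records;   /* completed blocks seen so far */
};

/* Feed an arbitrary chunk; there is no assumption that it ends on a delimiter. */
static void feed(struct stream_state *st, const char *chunk, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (chunk[i] == '\n')
            st->records++;
}

int main(void)
{
    struct stream_state st = {0};
    char chunk[4096];
    size_t n;
    while ((n = fread(chunk, 1, sizeof chunk, stdin)) > 0)
        feed(&st, chunk, n);
    printf("%lu records\n", st.records);
    return 0;
}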