What is the optimal trade-off between refactoring and increasing the call stack?

I'm looking at refactoring a lot of large (1000+ lines) methods into nice chunks that can then be unit tested as appropriate.
This started me thinking about the call stack, as many of my refactored blocks have other refactored blocks within them, and my large methods may well have been called by other large methods.
I'd like to open this for discussion to see if refactoring can lead to call stack issues. I doubt it will in most cases, but I wondered about refactored recursive methods and whether it is possible to cause a stack overflow without creating an infinite loop.

Excluding recursion, I wouldn't worry about call stack issues until they appear (which they likely won't).
Regarding recursion: it must be carefully implemented and carefully tested no matter how it's done, so this would be no different.

I guess it's technically possible. But not something that I would worry about unless it actually happens when I test my code.
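To make that "technically possible" concrete, here is a minimal Python sketch (my own illustration, not from the thread): the recursion is finite and terminates, but its depth grows with the input, so a large enough input exhausts the stack without any infinite loop.

def length(node):
    # Walk a linked list recursively; one stack frame per list node.
    if node is None:
        return 0
    return 1 + length(node[1])

# Build a finite linked list of 100,000 nodes as nested (value, rest) tuples.
lst = None
for i in range(100_000):
    lst = (i, lst)

try:
    print(length(lst))
except RecursionError:
    # The recursion terminates, yet its depth exceeds the default
    # CPython limit of roughly 1000 frames.
    print("stack exhausted without an infinite loop")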

When I was a kid, and computers had 64K of RAM, the call stack size mattered.
Nowadays, it's hardly worth discussing. Memory is huge, stack frames are small, a few extra function calls are hardly measurable.
As an example, Python has an artificially small call stack so it detects infinite recursion promptly. The default size is 1000 frames, but this is adjustable with a simple API call.
The only way to run afoul of the stack in Python is to tackle Project Euler problems without thinking. Even then, you typically run out of time before you run out of stack. (A loop over 100 trillion items would take years of run time in pure Python.)
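For reference, the "simple API call" lives in the sys module; a minimal sketch (the exact default is implementation-specific, 1000 frames in CPython):

import sys

print(sys.getrecursionlimit())  # default limit on recursion depth, 1000 in CPython

# Raise the limit before running a legitimately deep but finite recursion.
# This only changes the interpreter's counter; set it far too high and the
# process can still crash once the real OS thread stack is exhausted.
sys.setrecursionlimit(10_000)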

I think it's highly unlikely for you to get a stack overflow without recursion when refactoring. The only way I can see that happening is if you are allocating and/or passing a lot of data between methods on the stack itself.

Related

Callstack sampling in Erlang

I am currently investigating a performance issue within a large Erlang application. The application exhibits larger-than-expected CPU load. To get a first grasp of which parts of the system are responsible for the load, I'd like to perform callstack sampling as described in this answer.
Is there a better way to do this than calling erlang:process_info(Pid, backtrace) repeatedly and grepping the functions from that output?
Note that the system is too large to use fprof, and etop did not point me in the right direction either. Using fprof on only parts of the system is not possible right now either, as I first need to pinpoint the general location of the performance issue.
A simple way to get the actual size of the stack is process_info(Pid, stack_size). While this only returns the size of the stack in words, it is a very simple and efficient way of seeing which processes have large stacks.

Who first came up with the idea of the call stack?

It's simple yet fast and effective because of the locality property.
You also manage the memory, a finite resource, by adjusting just one pointer.
I think it's a brilliant idea.
Who first came up with the idea of the call stack?
Since when have computers had supporting instructions for a stack?
Is there any historically significant paper on it?
As far as I know, computers have used a call stack since their earliest days.
Stacks themselves were first proposed by Alan Turing, dating back to 1946. I believe that stacks were first used as a theoretical concept to define pushdown automata.
The first article about call stacks I could find was written by Dijkstra in the journal Numerische Mathematik, titled "Recursive Programming" (http://link.springer.com/article/10.1007%2FBF01386232).
Also note that the call stack is there mainly because of recursion. It may be difficult to actually know who really got the idea for the call stack in the first place, since it is pretty intuitive that a stack is needed if you want to support recursion. Consider this quote from Expert C Programming - Deep C Secrets, by Peter Van Der Linden:
A stack would not be needed except for recursive calls. If not for these, a fixed amount of space for local variables, parameters, and return addresses would be known at compile time and could be allocated in the BSS. [...] Allowing recursive calls means that we must find a way to permit multiple instances of local variables to be in existence at one time, though only the most recently created will be accessed - the classic specification of a stack.
This is from chapter 6, pages 143-144; if you like this kind of stuff, I highly recommend reading it.
It is easy to understand that a stack is probably the right structure to use when one wants to keep track of the call chain, since function calls on hold will return in a LIFO manner.

tracking memory allocation in Clojure

In my program all the state is held in a giant map in an atom, which is updated by a load of pure functions in each iteration. I have determined that the heap size is increasing; how do I find the code that's responsible? I tried VisualVM, but it gives generic information and I can't find which part of my state is growing and which function is causing it to grow.
Look for common gotchas like forgetting to use with-open, hanging onto the head of a sequence, etc.
Isolate smaller segments of your code and see if you still see the same kinds of memory growth using JVisualVM. If knocking out or mocking some piece makes no difference then put it back, and if it makes a difference then you can focus on that and figure out what is going on.
I don't know of any silver bullet tool or technique, it's just a process of divide and conquer, and thinking about what you are doing in your code.

Interview question on stack

Recently my friend attended an interview, where he faced this question (the interviewer made it up from my friend's answer to another question).
Say we have the option to use either
1) recursion --> uses the system stack; I think the OS takes care of everything
2) our own stack for just the data part, and get things done that way
to fix something. Which one do you prefer? And why?
Assume the stack size wouldn't grow beyond 100.
I would use the system stack. Why re-invent the wheel?
Function calls, while not really slow per se, do take non-zero time. Therefore an iterative solution can be slightly faster.
More often than not, simplicity is better than a slight performance gain.
Don't over-engineer a solution and lose maintainability/readability for 1 ms if you are not going to use that 1 ms.
Just remember that whatever clever little hack you put together has to be maintained (and proven to work first, for that matter), whereas many standard/system solutions are available that have already been proven (see Reinventing the wheel).
If it is really system-critical that you reduce memory allocation and improve performance, you have your work cut out for you, and you should be prepared to spend some time proving that your solution is better/faster and stable.
Interesting to see the general preference for recursion on here, and a few who assume that the recursive implementation will necessarily be clearer or more maintainable... maybe, maybe not :-).
recursion typically avoids an explicit loop
recursion can sometimes simply use local variables inside the function to avoid a container storing results as they're calculated
recursion can make it trivial to reverse the order in which sub-results are gathered
recursion means there's a limit to the depth of information being processed, whereas a loop implementation often easily avoids this, or at least has memory requirements that more accurately reflect the data-processing needs
the more widely applicable you want your software to be, the more important it is to remove arbitrary limits (e.g. UNIX software like modern vim, less, GNU grep etc. make minimal assumptions about file/line/expression length and dynamically attempt whatever they're asked / many here will remember old editors and vendor-specific utilities e.g. one "celestial" company's grep that would never match results at the end of a too-long line, editors that SIGSEGVed, shutdown, corrupted or slowed down into uselessness on long lines or files)
naive recursion can result in spectacularly inefficiently combined sub-results
some people find recursion easier to understand, some find it harder - definitely it suits how we think about some problems better than others
Depends on the algorithm. For small stack usage, use the system stack; if a lot of stack is needed, go to the heap. The stack size is limited by the OS, beyond which it throws a stack overflow ;-) If the algorithm uses more stack space than that, I would go with a stack data structure and push the data onto the heap.
Hm, I think it depends on the problem...
The stack size, if I got your point, is not the only thing limiting you to one approach or the other.
But if you want to use recursion... well, no harm really as far as the length of the stack goes, but I'd rather build my own solution.
Avoid recursion when you can. :)
Recursion may be the simplest way to solve a particular problem. An iterative solution can require more code and more opportunities for errors. The testing and maintenance cost may be greater than the performance benefit.
I would go with the first and use the system stack. That being said, in the language FORTH there are two system stacks: one is the return stack and the other is the parameter stack. This offers some nice flexibility.
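To ground the comparison running through this thread, here is a minimal Python sketch (my own illustration; the tree layout is made up): the same traversal written recursively on the system stack and iteratively with an explicit stack on the heap.

# Recursive version: depth is bounded by the call-stack limit.
def sum_tree_recursive(node):
    if node is None:
        return 0
    value, left, right = node
    return value + sum_tree_recursive(left) + sum_tree_recursive(right)

# Iterative version: the "stack" is an ordinary list on the heap,
# so depth is bounded only by available memory.
def sum_tree_iterative(root):
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        value, left, right = node
        total += value
        stack.append(left)
        stack.append(right)
    return total

# A tiny tree of (value, left-subtree, right-subtree) tuples.
tree = (1, (2, None, None), (3, (4, None, None), None))
assert sum_tree_recursive(tree) == sum_tree_iterative(tree) == 10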

Memory efficiency vs Processor efficiency

In general use, should I bet on memory efficiency or processor efficiency?
In the end, I know it must depend on the software/hardware specs, but I think there's a general rule when there are no constraints.
Example 01 (memory efficiency):
int n=0;
if(n < getRndNumber())
n = getRndNumber();
Example 02 (processor efficiency):
int n=0, aux=0;
aux = getRndNumber();
if(n < aux)
n = aux;
They're just simple examples, and I wrote them in order to show what I mean. Better examples will be well received.
Thanks in advance.
I'm going to wheel out the universal performance question trump card and say "neither, bet on correctness".
Write your code in the clearest possible way, set specific measurable performance goals, measure the performance of your software, profile it to find the bottlenecks, and then if necessary optimise knowing whether processor or memory is your problem.
(As if to make a case in point, your 'simple examples' have different behaviour assuming getRndNumber() does not return a constant value. If you'd written it in the simplest way, something like n = max(0, getRndNumber()) then it may be less efficient but it would be more readable and more likely to be correct.)
Edit:
To answer Dervin's criticism below, I should probably state why I believe there is no general answer to this question.
A good example is taking a random sample from a sequence. For sequences small enough to be copied into another contiguous memory block, a partial Fisher-Yates shuffle which favours computational efficiency is the fastest approach. However, for very large sequences where insufficient memory is available to allocate, something like reservoir sampling that favours memory efficiency must be used; this will be an order of magnitude slower.
So what is the general case here? For sampling a sequence should you favour CPU or memory efficiency? You simply cannot tell without knowing things like the average and maximum sizes of the sequences, the amount of physical and virtual memory in the machine, the likely number of concurrent samples being taken, the CPU and memory requirements of the other code running on the machine, and even things like whether the application itself needs to favour speed or reliability. And even if you do know all that, then you're still only guessing, you don't really know which one to favour.
Therefore the only reasonable thing to do is implement the code in a manner favouring clarity and maintainability (taking factors you know into account, and assuming that clarity is not at the expense of gross inefficiency), measure it in a real-life situation to see whether it is causing a problem and what the problem is, and then if so alter it. Most of the time you will not have to change the code as it will not be a bottleneck. The net result of this approach is that you will have a clear and maintainable codebase overall, with the small parts that particularly need to be CPU and/or memory efficient optimised to be so.
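As a rough Python sketch of the two sampling strategies contrasted above (illustrative only; the function names and the sample size k are mine, not from the answer):

import random

def sample_partial_fisher_yates(seq, k):
    # CPU-friendly: copy the sequence, perform k steps of a Fisher-Yates
    # shuffle, and return the first k elements. Needs the whole sequence
    # in memory at once.
    items = list(seq)
    for i in range(k):
        j = random.randrange(i, len(items))
        items[i], items[j] = items[j], items[i]
    return items[:k]

def sample_reservoir(iterable, k):
    # Memory-friendly: a single pass over an arbitrarily large stream,
    # keeping only k elements at any time, at the cost of extra random
    # number generation per element.
    reservoir = []
    for i, item in enumerate(iterable):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(sample_partial_fisher_yates(range(1000), 5))
print(sample_reservoir(range(1000), 5))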
You think one is unrelated to the other? Why do you think that? Here are two examples of bottlenecks that are often overlooked.
Example 1
You design a DB-related software system and find that I/O is slowing you down as you read in one of the tables. Instead of allowing multiple queries resulting in multiple I/O operations, you ingest the entire table first. Now all rows of the table are in memory and the only limitation should be the CPU. Patting yourself on the back, you wonder why your program becomes hideously slow on memory-poor computers. Oh dear, you've forgotten about virtual memory, swapping, and such.
Example 2
You write a program where your methods create many small objects but possess O(1), O(log n) or at worst O(n) speed. You've optimized for speed but see that your application takes a long time to run. Curiously, you profile to discover what the culprit could be. To your chagrin you discover that all those small objects add up fast. Your code is being held back by the GC.
You have to decide based on the particular application, usage, etc. In your above example, both memory and processor usage are trivial, so it's not a good example.
A better example might be the use of history tables in chess search. This method caches previously searched positions in the game tree in case they are re-searched in other branches of the game tree or on the next move.
However, it does cost space to store them, and space also requires time. If you use up too much memory you might end up using virtual memory which will be slow.
Another example might be caching in a database server. Clearly it is faster to access a cached result from main memory, but then again it would not be a good idea to keep loading and freeing from memory data that is unlikely to be re-used.
In other words, you can't generalize. You can't even make a decision based on the code - sometimes the decision has to be made in the context of likely data and usage patterns.
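As a tiny illustration of that space-for-time trade (a hedged Python sketch, not from the answer): memoizing a pure function makes repeated calls fast at the cost of holding every cached result in memory, and the cache size is the knob you tune.

import functools

@functools.lru_cache(maxsize=1024)  # bound the cache so memory use stays predictable
def expensive_lookup(key):
    # Stand-in for a slow computation or database query.
    return sum(i * i for i in range(100_000)) + key

expensive_lookup(42)   # slow: computed, then cached
expensive_lookup(42)   # fast: served from the in-memory cache
print(expensive_lookup.cache_info())  # hits, misses, and current cache size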
In the past 10 years, main memory has hardly increased in speed at all, while processors have continued to race ahead. There is no reason to believe this is going to change.
Edit: Incidentally, in your example, aux will most likely end up in a register and never make it to memory at all.
Without context, I think optimising for anything other than readability and flexibility is premature.
So, the only general rule I could agree with is "Optimise for readability, while bearing in mind the possibility that at some point in the future you may have to optimise for either memory or processor efficiency".
Sorry it isn't quite as catchy as you would like...
In your example, version 2 is clearly better, even though version 1 is prettier to me, since as others have pointed out, calling getRndNumber() multiple times requires more knowledge of getRndNumber() to follow.
It's also worth considering the scope of the operation you are looking to optimize; if the operation is time sensitive, say part of a web request or GUI update, it might be better to err on the side of completing it faster than saving memory.
Processor efficiency. Memory is egregiously slow compared to your processor. See this link for more details.
Although, in your example, the two would likely be optimized to be equivalent by the compiler.
