Who first came up with the idea of the call stack?

It's simple yet fast and effective because of its locality of reference.
You also manage the memory, a finite resource, by adjusting just one pointer.
I think it's a brilliant idea.
Who first came up with the idea of the call stack?
Since when have computers had instructions that support a stack?
Is there any historically significant paper on it?

As far as I know, computers have used a call stack since their earliest days.
Stacks themselves were first proposed by Alan Turing, dating back to 1946. I believe that stacks were first used as a theoretical concept to define pushdown automata.
The first article about call stacks I could find was Dijkstra's "Recursive Programming", published in the journal Numerische Mathematik (http://link.springer.com/article/10.1007%2FBF01386232).
Also note that the call stack is there mainly because of recursion. It may be difficult to know who really had the idea for the call stack in the first place, since it is fairly intuitive that a stack is needed if you want to support recursion. Consider this quote from Expert C Programming - Deep C Secrets, by Peter Van Der Linden:
A stack would not be needed except for recursive calls. If not for
these, a fixed amount of space for local variables, parameters, and
return addresses would be known at compile time and could be allocated
in the BSS. [...] Allowing recursive calls means that we must find a
way to permit multiple instances of local variables to be in existence
at one time, though only the most recently created will be accessed -
the classic specification of a stack.
This is from chapter 6, pages 143-144; if you like this kind of stuff, I highly recommend reading it.
It is easy to understand that a stack is probably the right structure to use when one wants to keep track of the call chain, since function calls on hold will return in a LIFO manner.
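To make that concrete, here is a minimal C sketch (the function is my own, purely illustrative): each active call holds its own instance of the local variable, and the calls return in LIFO order.

    #include <stdio.h>

    /* Each active call gets its own copy of 'local'; printing its address
       shows a distinct stack slot per invocation, and the "return" lines
       come out in LIFO order as the stack unwinds. */
    static void descend(int depth) {
        int local = depth;
        printf("enter %d, &local = %p\n", depth, (void *)&local);
        if (depth < 3)
            descend(depth + 1);
        printf("return %d\n", local);
    }

    int main(void) {
        descend(0);
        return 0;
    }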

Is a subroutine executed without a stack? Justify your answer with valid arguments

This is more of a homework (or exam) question; in particular, the expected arguments will be in the lectures and/or course material.
In practice, many languages do both, but in such a way that it's indistinguishable from always using the stack, because the stack is needed to handle recursion (and, these days, reentrancy), and executing a subroutine without using the stack is treated purely as an optimisation (often, "inlining").
A few very old languages (eg FORTRAN and COBOL) default to not supporting recursion (much less reentrancy), and therefore may or may not use the stack unless a subroutine is specifically marked as "recursive". Whether they use the stack or not for non-recursive subroutines is up to the compiler (and may differ from version to version or even from subroutine to subroutine).
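As a rough C sketch of the difference (these functions are hypothetical, not drawn from any of the languages above): an ordinary local gets one instance per active call on the stack, while a static local gets the fixed compile-time allocation described in the Van Der Linden quote earlier, at the cost of recursion and reentrancy.

    #include <stdio.h>

    /* Stack allocation: each call gets a fresh copy of its locals,
       so the function can safely call itself. */
    int factorial(int n) {
        int result = (n <= 1) ? 1 : n * factorial(n - 1);
        return result;                /* one 'result' per active call */
    }

    /* Static allocation: 'counter' lives at a fixed address (BSS),
       like a local in a non-recursive FORTRAN subroutine. Only one
       instance ever exists, so recursion and reentrancy are unsafe. */
    int next_id(void) {
        static int counter = 0;       /* not on the call stack */
        return ++counter;
    }

    int main(void) {
        printf("%d\n", factorial(5)); /* 120 */
        printf("%d\n", next_id());    /* 1 */
        printf("%d\n", next_id());    /* 2 */
        return 0;
    }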
How this maps to the expected answer on your homework (or exam) depends on how these aspects were covered in the lectures and/or course materials; different courses will emphasise different parts, particularly if they deal with a specific programming language (eg. Python vs C/C++/C# vs FORTRAN).

Interview question on stack

Recently my friend attended an interview, and he faced this question (the interviewer made it up from my friend's answer to another question):
Say we have the option to use either
1) recursion --> uses the system stack (I think the OS takes care of everything), or
2) our own stack for only the data part, and get things done
to fix something. Which one do you prefer, and why?
Assume the stack size wouldn't grow beyond 100.
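For concreteness, here is one way to read the two options, sketched in C with a pre-order tree walk (the tree type and function names are mine, not the interviewer's):

    #include <stdio.h>

    #define MAX_DEPTH 100   /* per the question: won't grow beyond 100 */

    struct node {
        int value;
        struct node *left, *right;
    };

    /* Option 1: recursion - return addresses and locals go on the
       system stack, managed for us. */
    void visit_recursive(const struct node *n) {
        if (n == NULL)
            return;
        printf("%d\n", n->value);
        visit_recursive(n->left);
        visit_recursive(n->right);
    }

    /* Option 2: our own stack, holding only the data part
       (node pointers), managed by hand. */
    void visit_own_stack(const struct node *root) {
        const struct node *stack[MAX_DEPTH];
        int top = 0;
        if (root != NULL)
            stack[top++] = root;                    /* push */
        while (top > 0) {
            const struct node *n = stack[--top];    /* pop */
            printf("%d\n", n->value);
            if (n->right) stack[top++] = n->right;  /* right first, so */
            if (n->left)  stack[top++] = n->left;   /* left pops first */
        }
    }

    int main(void) {
        struct node l = {2, NULL, NULL}, r = {3, NULL, NULL};
        struct node root = {1, &l, &r};
        visit_recursive(&root);  /* prints 1 2 3 */
        visit_own_stack(&root);  /* prints 1 2 3 */
        return 0;
    }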
I would use the system stack. Why re-invent the wheel?
Function calls, while not really slow per se, do take non-zero time. Therefore an iterative solution can be slightly faster.
More often than not, simplicity is better than a slight performance gain.
Don't over-engineer a solution and lose maintainability/readability for 1 ms if you are not going to use that 1 ms.
Just remember that whatever clever little hack you put together has to be maintained (and proven to work first, for that matter), whereas many proven standard/system solutions are already available (see Reinventing the wheel).
If it is really critical to the system that you reduce memory allocation and enhance performance, you have your work cut out for you, and should be prepared to spend some time proving that your solution is better, faster, and stable.
Interesting to see the general preference for recursion on here, and a few who assume that the recursive implementation will necessarily be clearer or more maintainable... maybe, maybe not :-).
- recursion typically avoids an explicit loop
- recursion can sometimes simply use local variables inside the function, avoiding a container that stores results as they're calculated
- recursion can make it trivial to reverse the order in which sub-results are gathered
- recursion means there's a limit to the depth of information being processed, whereas a loop implementation often avoids this easily, or at least has memory requirements that more accurately reflect the data-processing needs
- the more widely applicable you want your software to be, the more important it is to remove arbitrary limits (e.g. UNIX software like modern vim, less, and GNU grep make minimal assumptions about file/line/expression length and dynamically attempt whatever they're asked; many here will remember old editors and vendor-specific utilities, e.g. one "celestial" company's grep that would never match results at the end of a too-long line, or editors that SIGSEGVed, shut down, corrupted data, or slowed into uselessness on long lines or files)
- naive recursion can combine sub-results spectacularly inefficiently (the classic example being the doubly recursive Fibonacci)
- some people find recursion easier to understand, some find it harder; it definitely suits how we think about some problems better than others
It depends on the algorithm. For small stack usage, use the system stack; if a lot of stack is needed, go to the heap. The stack size is limited by the OS, beyond which it throws a stack overflow ;-) If the algorithm uses more stack space than that, I would go with a stack data structure and push the data onto the heap.
Hm, I think it depends on the problem...
The stack size, if I got your point, is not the only thing that limits you to one approach or the other.
As for wanting to use recursion... well, there's nothing really wrong with it as far as stack length goes, but I'd rather build my own solution.
Avoid recursion when you can. :)
Recursion may be the simplest way to solve a particular problem. An iterative solution can require more code and introduce more opportunities for errors. The testing and maintenance cost may be greater than the performance benefit.
I would go with the first and use the system stack. That being said, in the language FORTH there are two system stacks: one is the return stack and the other is the parameter stack. This offers some nice flexibility.

More than one stack in a microprocessor

Can I use more than one stack in a microprocessor?
And if I can, how can I program those?
Sure you can. Several CPU architectures have multiple stack pointers - even lowly 8-bit processors, such as the M6809. And even if the concept is not implemented in the CPU hardware, you can easily create multiple stacks in software. A stack pointer is basically an index register, so you could (for example) use the IX and IY registers of the Z80 to implement multiple stacks.
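A minimal C sketch of the software approach (the names and the two-stack split are mine; overflow checks are omitted for brevity): each stack is just a buffer plus its own "stack pointer" index, much like dedicating IX and IY.

    #include <stdio.h>

    #define STACK_SIZE 64

    /* Two independent software stacks: each is just a buffer plus
       its own "stack pointer" index. */
    int data_stack[STACK_SIZE];
    int data_sp = 0;
    int return_stack[STACK_SIZE];
    int return_sp = 0;

    void push(int *stack, int *sp, int value) { stack[(*sp)++] = value; }
    int  pop(int *stack, int *sp)             { return stack[--(*sp)]; }

    int main(void) {
        push(data_stack, &data_sp, 42);      /* operands on one stack */
        push(return_stack, &return_sp, 7);   /* return info on another */
        printf("%d %d\n", pop(data_stack, &data_sp),
                          pop(return_stack, &return_sp));
        return 0;
    }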
If your microprocessor has more than one hardware stack, then yes, you can. You would have to write assembler, though, since no C/C++ implementation makes use of multiple stacks.
It would be easier to help if you could say exactly what architecture you're talking about.
As for the how of it: generally there is a special register or memory location that points to the stack. Using another stack is as simple as changing that value. This is all processor- and architecture-dependent, so it depends on the one you are using.
On some platforms, the stack used for return addresses is entirely separate from the one used for parameter passing. Indeed, on some platforms, C compilers don't permit recursion and don't use any stack for parameter passing. Frankly, I like such designs, since they minimize the likelihood of stack problems causing errant program behavior.

Erlang: What are the pros and cons of different methods for avoiding intermediate variables?

At one point while traveling the web, I came across a great page which contrasted the clarity and terseness of different methods of doing a sequence of operations without having to make a bunch of throwaway variables, e.g., Var1, Var2, Var3. It tried list comprehensions, folds, maps, etc. For some reason, no matter what I google, I can't find it again. Anyone have any idea what I'm talking about? Or want to explore the topic anyway?
Your question doesn't make much sense.
List comprehensions, fold, and map aren't for avoiding variables (nor are they interchangeable); they're the right ways to process data, depending on what you're trying to do.
This is the article you were looking for:
http://erlanganswers.com/web/mcedemo/VersionedVariables.html
It is probably more of an art than a science. In a nutshell my advice is to lean away from using throw-aways as a general habit, but equally, do not be afraid of using them intelligently and sparingly where you feel appropriate or necessary.
When you are starting to learn, then by all means use throwaway variables if it helps you break things down into understandable chunks. But try to break away from that sooner rather than later, as using throwaways may at times make your code harder to maintain and modify.
On the other hand, even when you are experienced you may sometimes find it worth using throwaways for the same reason: to keep things readable and manageable for less experienced programmers. Purists may say that you should never use them, but when you consider the lifetime cost of software maintenance, readability matters a great deal. Maybe this argument doesn't apply if you are lucky enough to work in an environment that only hires the best of the best, but for the rest of us that's simply not a reflection of the real world.
The bottom line : what is "right" depends on your skill level, the skill level of your peers, what you are doing, and the likely volatility, complexity, and lifetime of the code. Use your best judgement.
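The original article isn't to hand, but the underlying trade-off is language-neutral; here is a minimal C sketch (the pipeline stages are hypothetical) of throwaway intermediates versus composed calls:

    #include <stdio.h>

    /* Hypothetical pipeline stages, purely for illustration. */
    int parse(const char *s) { return s[0] - '0'; }
    int validate(int v)      { return v < 0 ? 0 : v; }
    int transform(int v)     { return v * v; }

    int main(void) {
        /* Style 1: throwaway intermediates - easy to step through
           and inspect, at the cost of inventing names. */
        int v1 = parse("7");
        int v2 = validate(v1);
        int v3 = transform(v2);
        printf("%d\n", v3);   /* 49 */

        /* Style 2: composed calls - terser, no names to invent,
           but harder to inspect mid-pipeline. */
        printf("%d\n", transform(validate(parse("7"))));
        return 0;
    }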
In response to the answer saying the question doesn't make sense, you would certainly think it made sense if you saw the article to which I'm referring. The point is to elegantly process a series of statements without redundant intermediate variables. Zed is right on target. I really wish I could find the original link because it was super detailed and went through 5 or 6 methods, some of which were referenced from the erlang mailing list, and weighed the pros and cons of each.

What is the optimal trade off between refactoring and increasing the call stack?

I'm looking at refactoring a lot of large (1000+ lines) methods into nice chunks that can then be unit tested as appropriate.
This started me thinking about the call stack, as many of my refactored blocks have other refactored blocks within them, and my large methods may well have been called by other large methods.
I'd like to open this for discussion to see if refactoring can lead to call stack issues. I doubt it will in most cases, but wondered about refactored recursive methods and whether it would be possible to cause a stack overflow without creating an infinite loop?
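For what it's worth, a stack overflow without an infinite loop is easy to demonstrate; here is a minimal C sketch (the depth and the per-frame padding are arbitrary values of mine):

    #include <stdio.h>

    /* Finite (not infinite) recursion that still overflows: ten
       million frames, each slightly padded, exhaust a typical
       1-8 MB thread stack long before the base case is reached. */
    long deep(long n) {
        volatile char pad[64];        /* enlarge each frame a little */
        pad[0] = (char)(n & 0x7f);
        if (n == 0)
            return pad[0];
        return deep(n - 1) + pad[0];  /* not a tail call */
    }

    int main(void) {
        printf("%ld\n", deep(10000000L)); /* very likely crashes first */
        return 0;
    }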
Excluding recursion, I wouldn't worry about call stack issues until they appear (which they likely won't).
Regarding recursion: it must be carefully implemented and carefully tested no matter how it's done, so this would be no different.
I guess it's technically possible. But not something that I would worry about unless it actually happens when I test my code.
When I was a kid, and computers had 64K of RAM, the call stack size mattered.
Nowadays, it's hardly worth discussing. Memory is huge, stack frames are small, a few extra function calls are hardly measurable.
As an example, Python has an artificially small call stack so it detects infinite recursion promptly. The default size is 1000 frames, but this is adjustable with a simple API call.
The only way to run afoul of the stack in Python is to tackle Project Euler problems without thinking. Even then, you typically run out of time before you run out of stack. (100 trillion loops would take far longer than a human lifespan.)
I think it's highly unlikely for you to get a stack overflow without recursion when refactoring. The only way I can see this happening is if you are allocating and/or passing a lot of data between methods on the stack itself.
