Is a subroutine executed without a stack? Justify your answer with valid arguments.
This is more of a homework (or exam) question; in particular, the expected arguments will be found in the lectures and/or course material.
In practice, many languages do both, but in a way that is indistinguishable from always using the stack: the stack is needed to handle recursion (and, these days, reentrancy), so executing a subroutine without using the stack is treated purely as an optimisation (often "inlining").
A few very old languages (eg FORTRAN and COBOL) default to not supporting recursion (much less reentrancy), and therefore may or may not use the stack unless a subroutine is specifically marked as "recursive". Whether they use the stack for non-recursive subroutines is up to the compiler (and may differ from version to version, or even from subroutine to subroutine).
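To make both strategies concrete, here is a small C sketch (illustrative only, not tied to any particular compiler): the first function is a candidate for inlining, so no call frame need ever be created for it; the second mimics FORTRAN-style static allocation, which is exactly why such a subroutine cannot be recursive or reentrant.

```c
#include <stdio.h>

/* A small, non-recursive function. A compiler may inline the call,
   so no return address or stack frame is ever created for it. */
static inline int square(int x) { return x * x; }

/* FORTRAN-style static allocation: the "local" lives at a fixed
   address (in the data/BSS area), not on the stack. There is only
   one copy of count shared by all invocations, so the function can
   be neither recursive nor reentrant. */
int next_id(void) {
    static int count = 0;
    return ++count;
}

int main(void) {
    int first = next_id();
    int second = next_id();
    printf("%d %d %d\n", square(5), first, second);  /* 25 1 2 */
    return 0;
}
```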
How this maps to the expected answer on your homework (or exam) depends on how these aspects were covered in the lectures and/or course materials; different courses will emphasise different parts, particularly if they deal with a specific programming language (eg. Python vs C/C++/C# vs FORTRAN).
I now understand how virtual memory works and what is responsible for setting up this virtual memory. However, some days ago I encountered memory segmentation, which splits the address space into segments like data and text. I cannot find any clear, unambiguous resources (at least to me) that explain memory segmentation. For instance, I would like to know:
What is responsible for splitting up address spaces into segments?
How exactly does it work? Like how are segments translated to physical addresses, and what checks if an address within a certain segment has been accessed?
I have found this wiki article but it does not really answer such questions.
The term "segment" appears in at least two distinct memory contexts.
In ye olde days, segmentation was a method used for memory protection. The Intel chips continued the use of segments for decades after they were obsolete. Intel finally dropped the use of segments in 64-bit mode, but they still exist in vestigial form, and they still exist fully in 32-bit mode.
That is the type of "segmentation" described in the wikipedia link.
The "code" and "data"-type segmentation is something entirely different. Another term for this is "program section."
When you link your code, the linker usually groups memory with the same attributes into "program sections" (aka "segments"). Typically you will have memory that:
is read-only and executable (code)
is read/write and initialized to zero (often called "BSS")
is read/write and initialized to specified values (data)
is read-only (constants)
In order to control the grouping of related memory, linkers generally use named segments/program sections. A linker may, by default, create a program section/segment called "Code" and place all the executable code in that segment. It may create, by default, a segment called "Data" and place initialized data in that segment.
Powerful linkers allow the programmer to override these. Some assembly languages and system languages allow you to specify program sections.
"Segments" in this context only exist only in the linking process. There is no area in memory marked "Code" or "Data" (unless you are using the olde Intel system).
What is responsible for splitting up address spaces into segments?
The address space is not split up into segments of this second type on modern systems (ie those designed after 1970 and not from Intel). Some confusing books use this as a pedagogical concept in diagrams. A process can (and usually does) have code pages interspersed with data pages.
Like how are segments translated to physical addresses, and what checks if an address within a certain segment has been accessed?
That question relates to the use of the term "Segment" described at the top. That translation is done using hardware registers.
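To make the translation concrete, here is the simplest and best-known case, real-mode 8086, where the hardware computes the physical address directly from a segment register (a sketch; protected-mode segmentation instead looks the segment up in a descriptor table and also checks limits and access rights, which is where the "checking" happens):

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode 8086 translation: physical = segment * 16 + offset */
uint32_t real_mode_phys(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* e.g. segment 0xB800 (text video memory), offset 0x0000 -> 0xB8000 */
    printf("0x%05X\n", real_mode_phys(0xB800, 0x0000));
    return 0;
}
```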
Well, to be honest, I would rather you consult books that cover the basics thoroughly than read articles, because articles tend to be narrowly focused and pitched above the basic level (at least to me).
Every term in your question is a separate topic, each of which is described very well in the reference below. If you really want answers and clear concepts, then you should go through this:
Read Abraham Silberschatz's "Operating System Concepts".
Chapter 8: Memory Management
Subtopics: Paging (basic method and hardware support), Segmentation
I'm writing a joke language that is based on stack operations. I've tried to find the minimum amount of instructions necessary to make it Turing complete, but have no idea if a language based on one stack can even be Turing complete. Will these instructions be enough?
IF (top of stack is non-zero)
WHILE (top of stack is non-zero)
PUSH [n-bit integer (where n is a natural number)]
POP
SWAP (top two values)
DUPLICATE (top value)
PLUS (adds top two values, pops them, and pushes result)
I've looked at several questions and answers (like this one and this one) and believe that the above instructions are sufficient. Am I correct? Or do I need something else like function calls, or variables, or another stack?
If those instructions are sufficient, are any of them superfluous?
EDIT: By adding a ROTATE command (which changes the top three values of the stack from A B C to B C A) and eliminating the DUPLICATE, PLUS, and SWAP commands, it is possible to implement a 3-character version of the Rule 110 cellular automaton. Is this sufficient to prove Turing completeness?
If there is an example of a Turing complete one-stack language without variables or functions that would be great.
If you want to prove that your language is Turing complete, then you should look at this Q&A on the Math StackExchange site.
How to Prove a Programming Language is Turing Complete?
One approach is to see if you can write a program using your language that can simulate an arbitrary Turing Machine. If you can, that is a proof of Turing completeness.
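For a feel of how little that takes, here is a minimal Turing machine simulator in C (a sketch with a hard-coded example machine, a bit-flipper, rather than an "arbitrary" one; generalising it to read a machine description is straightforward):

```c
#include <stdio.h>
#include <string.h>

/* A minimal Turing machine simulator. The hard-coded machine below
   flips every bit of its input and halts on the first blank. */

#define TAPE_LEN 64
#define BLANK    '_'
#define HALT     -1

typedef struct {
    char write;   /* symbol to write       */
    int  move;    /* -1 = left, +1 = right */
    int  next;    /* next state, or HALT   */
} Rule;

/* rules[state][symbol]: symbol index 0 = '0', 1 = '1', 2 = blank */
static const Rule rules[1][3] = {
    { {'1',   +1, 0},        /* state 0, read '0': write '1', go right */
      {'0',   +1, 0},        /* state 0, read '1': write '0', go right */
      {BLANK,  0, HALT} },   /* state 0, read blank: halt              */
};

static int sym_index(char c) { return c == '0' ? 0 : c == '1' ? 1 : 2; }

int main(void) {
    char tape[TAPE_LEN];
    memset(tape, BLANK, sizeof tape);
    memcpy(tape + 1, "10110", 5);   /* input, with a blank margin */

    int state = 0, head = 1;
    while (state != HALT) {
        const Rule *r = &rules[state][sym_index(tape[head])];
        tape[head] = r->write;
        head += r->move;
        state = r->next;
    }
    printf("%.62s\n", tape + 1);    /* prints 01001 followed by blanks */
    return 0;
}
```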
If you want to know if any of those instructions are superfluous, see if you can simplify your TM emulator to not use one of the instructions.
But if you want to know if a smaller Turing complete language is possible, look at SKI Combinator Calculus. Arguably, there are three instructions: the S, K and I combinators. And I is redundant, since S K K behaves exactly like it: S K K x reduces to K x (K x), which reduces to x.
A language based only on a single stack can't be Turing complete (unless you "cheat" by allowing stuff like temporary variables or access to values "deeper" in the stack than the top item). Such a language is, as I understand it, equivalent to a pushdown automaton, which can implement some things (e.g. context-free grammars) but certainly not as much as a full Turing machine.
With that said, Turing machines are actually a much lower bar than you'd intuitively expect - as originally formulated, they were little more than a linked list, the ability to read and modify the linked list, and branching. You don't even need to add all that much to a purely stack-oriented language to make it equivalent to a Turing machine - a second stack will technically do it (although I certainly wouldn't want to program against it), as would a linked list or queue.
Correct me if I'm wrong, but I'd think that establishing that you can read from and write to memory, can do branching, and have at least one of those data structures (two stacks, one queue, one linked list, or the equivalent) would be adequate to establish Turing completeness.
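To see why a second stack is enough: two stacks can simulate a Turing machine's tape. The left stack holds everything to the left of the head, the right stack holds the current cell and everything to its right, and moving the head is just a pop from one stack pushed onto the other. A minimal sketch in C:

```c
#include <stdio.h>

#define CAP   1024
#define BLANK '_'

/* Two stacks model the tape: 'lstack' is everything left of the head
   (top = nearest cell), 'rstack' has the current cell on top. Popping
   an empty stack yields a blank, modelling an unbounded tape. */
static char lstack[CAP], rstack[CAP];
static int  ln = 0, rn = 0;

static char pop(char *s, int *n)          { return *n ? s[--*n] : BLANK; }
static void push(char *s, int *n, char c) { s[(*n)++] = c; }

static char read_head(void)    { char c = pop(rstack, &rn); push(rstack, &rn, c); return c; }
static void write_head(char c) { pop(rstack, &rn); push(rstack, &rn, c); }
static void move_right(void)   { push(lstack, &ln, pop(rstack, &rn)); }
static void move_left(void)    { push(rstack, &rn, pop(lstack, &ln)); }

int main(void) {
    /* Load tape "abc" with the head on 'a' (push in reverse). */
    push(rstack, &rn, 'c'); push(rstack, &rn, 'b'); push(rstack, &rn, 'a');

    write_head('A');                 /* overwrite current cell */
    move_right(); move_right();      /* head now over 'c'      */
    printf("%c\n", read_head());     /* prints: c              */
    move_left();
    printf("%c\n", read_head());     /* prints: b              */
    return 0;
}
```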
Take a look, too, at nested stack automata.
You may also want to look at the Chomsky hierarchy (it seems like you may be floating somewhere in the vicinity of a Type 1 or a Type 2 language).
As others have pointed out, if you can simulate any Turing machine, then your language is Turing-complete.
Yet Turing machines, despite their conceptual simplicity and their amenability to mathematical treatment, are not the easiest machines to simulate.
As a shortcut, you can simulate some simple language that has already been proved Turing-complete.
My intuition tells me that a functional language, particularly LISP, might be a good choice. This SO Q&A has pointers to what a minimum Turing-complete LISP looks like.
The call stack is simple, yet fast and effective because of its locality property.
You also manage the memory, a finite resource, by adjusting just one pointer.
I think it's a brilliant idea.
Who first came up with the idea of the call stack?
Since when have computers had instructions supporting a stack?
Is there any historically significant paper on it?
As far as I know, computers have used a call stack since their earliest days.
Stacks themselves were first proposed by Alan Turing, dating back to 1946. I believe that stacks were first used as a theoretical concept to define pushdown automata.
The first article about call stacks I could find was written by Dijkstra in the journal Numerische Mathematik, titled "Recursive Programming" (http://link.springer.com/article/10.1007%2FBF01386232).
Also note that the call stack is there mainly because of recursion. It may be difficult to actually know who really got the idea for the call stack in the first place, since it is pretty intuitive that a stack is needed if you want to support recursion. Consider this quote from Expert C Programming - Deep C Secrets, by Peter Van Der Linden:
A stack would not be needed except for recursive calls. If not for
these, a fixed amount of space for local variables, parameters, and
return addresses would be known at compile time and could be allocated
in the BSS. [...] Allowing recursive calls means that we must find a
way to permit multiple instances of local variables to be in existence
at one time, though only the most recently created will be accessed -
the classic specification of a stack.
This is from chapter 6, page 143/144 - if you like this kind of stuff, I highly recommend reading it.
It is easy to understand that a stack is probably the right structure to use when one wants to keep track of the call chain, since function calls on hold will return in a LIFO manner.
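The quote's point is easy to see concretely: every active call needs its own copy of the locals, and the copies are released in LIFO order. A small C illustration, where each recursive call gets its own depth variable at a distinct stack address:

```c
#include <stdio.h>

/* Each active call has its own "depth", alive at the same time as the
   callers' copies - a single fixed, compile-time-allocated slot could
   not hold them all. The addresses printed differ per call, and the
   calls unwind in LIFO order: the classic behaviour of a stack. */
void descend(int depth) {
    printf("enter depth=%d, &depth=%p\n", depth, (void *)&depth);
    if (depth < 3)
        descend(depth + 1);
    printf("leave depth=%d\n", depth);
}

int main(void) {
    descend(0);
    return 0;
}
```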
Using non-deterministic functions is unavoidable in applications that talk to the real world, so making a clear separation between deterministic and non-deterministic code is important.
Haskell has the IO monad, which marks the impure context; by looking at it, we know that everything outside of it is pure. Which is nice, if you ask me: when it comes to unit testing, one can tell which parts of the code are ultimately testable and which are not.
I could not find anything that allows separating the two in F#. Does it mean there is just no way to do that?
The distinction between deterministic and non-deterministic functions is not captured by the F# type system, but a typical F# system that needs to deal with non-determinism would use some structure (or "design pattern") that clearly separates the two.
If your core model is some computation that does not interact with the world (you only need to collect inputs and run the computation), then you can write most of your code as functional transformations on immutable data structures and then invoke these from some "main" I/O loop.
If you're writing some highly interactive or reactive application then you can use F# agents (here is an introductory article) and structure your application so that the non-determinism is safely contained in individual agents (see more about agent-based architectures)
F# is based on OCaml, and much like OCaml it isn't pure FP. I don't believe there is a way to accomplish your goal in either language.
One way to manage it could be to make up a nominal type representing the notion of the real world, and to make sure each non-deterministic function takes its singleton as a parameter. That way, all dependent functions have to pass it on along the line. This makes a strong distinction between the two, at the cost of some discipline and a bit of extra typing. The good thing about this approach is that it can be verified by the compiler, given that the necessary conditions are met.
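The idea is language-agnostic; here is a rough transliteration into C (a sketch, all names hypothetical), where a non-deterministic function cannot be called without a World token and only the entry point hands one out:

```c
#include <stdio.h>
#include <time.h>

/* A token representing "permission to touch the real world". Only
   main() creates one; every non-deterministic function must take it,
   so the requirement propagates up the call chain and the compiler
   rejects calls that lack it. */
typedef struct { int _unused; } World;

/* Non-deterministic: requires the token. */
long now_seconds(World *w) { (void)w; return (long)time(NULL); }

/* Deterministic: no token, freely unit-testable. */
long add_minutes(long t, long minutes) { return t + 60 * minutes; }

int main(void) {
    World world = {0};   /* the singleton */
    long later = add_minutes(now_seconds(&world), 5);
    printf("%ld\n", later);
    return 0;
}
```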
Can I use more than one stack in a microprocessor?
And if I can, how can I program them?
Sure you can. Several CPU architectures have multiple stack pointers - even lowly 8-bit processors, such as the M6809. And even if the concept is not implemented in the CPU hardware, you can easily create multiple stacks in software. A stack pointer is basically just an index register, so you could (for example) use the IX and IY registers of the Z80 to implement multiple stacks.
If your microprocessor has more than one hardware stack, then yes, you can. You would have to write assembler, though, since no C/C++ implementation makes use of multiple stacks.
It would be easier to help if you could say exactly what architecture you're talking about.
As for the how of it: generally there is a special register or memory location that is used to point to the stack. Using another stack is as simple as setting this value. This is all processor- and architecture-dependent, so it depends on the one you are using.
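If the CPU offers only one hardware stack pointer (or none you want to touch), the same idea works in software: a stack is just an array plus an index acting as its stack pointer, so you can keep as many as you like. A sketch in C (overflow checks omitted to keep it short):

```c
#include <stdio.h>

#define DEPTH 32

/* A software stack: storage plus its own "stack pointer" (an index). */
typedef struct {
    int data[DEPTH];
    int sp;           /* points at the next free slot */
} Stack;

static void push(Stack *s, int v) { s->data[s->sp++] = v; }
static int  pop(Stack *s)         { return s->data[--s->sp]; }

int main(void) {
    /* Two independent stacks - switching is just using the other
       pointer, much as one would switch between SP, IX and IY on a
       Z80-like CPU. */
    Stack args = { .sp = 0 }, returns = { .sp = 0 };

    push(&args, 7);
    push(&returns, 0x1234);

    printf("arg=%d ret=0x%X\n", pop(&args), pop(&returns));
    return 0;
}
```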
On some platforms, the stack used for return addresses is entirely separate from the one used for parameter passing. Indeed, on some platforms, C compilers don't permit recursion and don't use any stack for parameter passing. Frankly, I like such designs, since they minimize the likelihood of stack problems causing errant program behavior.