Can I use more than one stack in a microprocessor?
And if I can, how do I program them?
Sure you can. Several CPU architectures have multiple stack pointers - even lowly 8-bit processors, such as the M6809. And even if the concept is not implemented in the CPU hardware, you can easily create multiple stacks in software. A stack pointer is basically just an index register, so you could (for example) use the IX and IY registers of the Z80 to implement multiple stacks.
If your microprocessor has more than one hardware stack, then yes, you can. You would have to write assembler though, since no C/C++ implementation makes use of multiple stacks.
It would be easier to help if you could say exactly what architecture you're talking about.
As for the how of it: generally there is a special register or memory location that points to the top of the stack. Switching to another stack is as simple as changing that value. The details are processor- and architecture-dependent, so it depends on the one you are using.
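Even without hardware support, you can keep several stacks in software by maintaining one pointer (or index) per stack. Here is a minimal C sketch of that idea; the struct, the function names and the fixed capacity are made up purely for illustration:

    #include <stdio.h>

    #define STACK_SIZE 64

    /* Each software stack is just a buffer plus its own "stack pointer". */
    typedef struct {
        int data[STACK_SIZE];
        int top;                      /* index of the next free slot */
    } Stack;

    static void push(Stack *s, int value) {
        if (s->top < STACK_SIZE)
            s->data[s->top++] = value;
    }

    static int pop(Stack *s) {
        return (s->top > 0) ? s->data[--s->top] : -1;   /* -1 as a crude underflow marker */
    }

    int main(void) {
        Stack calls  = { .top = 0 };  /* e.g. one stack for return addresses */
        Stack params = { .top = 0 };  /* ...and a separate one for parameters */

        push(&calls, 0x1234);
        push(&params, 42);
        printf("param %d, return to %#x\n", pop(&params), pop(&calls));
        return 0;
    }

Switching between stacks is nothing more than choosing which Stack you pass around, which is the software equivalent of reloading the hardware stack-pointer register.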
On some platforms, the stack used for return addresses is entirely separate from the one used for parameter passing. Indeed, on some platforms, C compilers don't permit recursion and don't use any stack for parameter passing. Frankly, I like such designs, since they minimize the likelihood of stack problems causing errant program behavior.
It's simple yet fast and effective because of the locality property.
You also manage the memory, a finite resource, by adjusting just one pointer.
I think it's a brilliant idea.
Who first came up with the idea of the call stack?
Since when have computers had instructions supporting a stack?
Is there any historically significant paper on it?
As far as I know, computers have used a call stack since their earliest days.
Stacks themselves were first proposed by Alan Turing, dating back to 1946. I believe that stacks were first used as a theoretical concept to define pushdown automata.
The first article about call stacks I could find was written by Dijkstra in the journal Numerische Mathematik, titled "Recursive Programming" (http://link.springer.com/article/10.1007%2FBF01386232).
Also note that the call stack is there mainly because of recursion. It may be difficult to know who actually first had the idea for the call stack, since it is pretty intuitive that a stack is needed if you want to support recursion. Consider this quote from Expert C Programming: Deep C Secrets, by Peter van der Linden:
"A stack would not be needed except for recursive calls. If not for these, a fixed amount of space for local variables, parameters, and return addresses would be known at compile time and could be allocated in the BSS. [...] Allowing recursive calls means that we must find a way to permit multiple instances of local variables to be in existence at one time, though only the most recently created will be accessed - the classic specification of a stack."
This is from chapter 6, pages 143-144; if you like this kind of stuff, I highly recommend reading it.
It is easy to understand that a stack is probably the right structure to use when one wants to keep track of the call chain, since function calls on hold will return in a LIFO manner.
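To make the quoted point concrete, here is a tiny illustrative C example: every active call to depth() has its own copy of the local variable local, and those copies are released in LIFO order as the calls return.

    #include <stdio.h>

    /* Each recursive call gets its own stack frame, and therefore its own 'local'. */
    static void depth(int n) {
        int local = n * 10;    /* one instance per active call */
        if (n > 0)
            depth(n - 1);      /* frames for n, n-1, ... all exist at the same time */
        printf("returning from n=%d, local=%d\n", n, local);   /* printed in LIFO order */
    }

    int main(void) {
        depth(3);
        return 0;
    }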
Recently my friend attended an interview and was asked this question (the interviewer made it up from my friend's answer to another question).
Say we have the option to use either
1) recursion --> uses the system stack; I think the OS takes care of everything
2) our own stack for just the data part and get things done
to solve some problem. Which one do you prefer, and why?
Assume the stack depth wouldn't grow beyond 100.
I would use the system stack. Why re-invent the wheel?
Function calls, while not really slow per se, do take non-zero time. Therefore an iterative solution can be slightly faster.
More often than not, simplicity is better than a slight performance gain.
Don't over-engineer a solution and lose maintainability/readability for 1 ms if you are not going to use that 1 ms.
Just remember that whatever clever little hack you put together has to be maintained (and proven to work first, for that matter), whereas many standard/system solutions are available that have already been proven (see Reinventing the wheel).
If it is really system-critical that you reduce memory allocation and enhance performance, you have your work cut out for you; be prepared to spend some time proving that your solution is better/faster and stable.
Interesting to see the general preference for recursion on here, and a few who assume that the recursive implementation will necessarily be clearer or more maintainable... maybe, maybe not :-).
recursion typically avoids an explicit loop
recursion can sometimes simply use local variables inside the function, avoiding the need for a separate container to store results as they're calculated
recursion can make it trivial to reverse the order in which sub-results are gathered
recursion means there's a limit to the depth of information being processed, whereas a loop implementation often avoids this easily, or at least has memory requirements that more accurately reflect the data-processing needs
the more widely applicable you want your software to be, the more important it is to remove arbitrary limits (e.g. UNIX software like modern vim, less and GNU grep makes minimal assumptions about file/line/expression length and dynamically attempts whatever it's asked to do; many here will remember older editors and vendor-specific utilities, e.g. one "celestial" company's grep that would never match results at the end of an over-long line, or editors that segfaulted, shut down, corrupted data, or slowed to uselessness on long lines or files)
naive recursion can combine sub-results spectacularly inefficiently (see the Fibonacci sketch after this list)
some people find recursion easier to understand, some find it harder - definitely it suits how we think about some problems better than others
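As a concrete (and classic) illustration of the "inefficiently combined sub-results" point, naive recursive Fibonacci recomputes the same values over and over, while a simple loop does the work in linear time. A small C sketch:

    /* Naive recursion: fib(n) calls fib(n-1) and fib(n-2), recomputing the
     * same sub-results again and again - roughly 2^n calls in total. */
    static long fib_recursive(int n) {
        return (n < 2) ? n : fib_recursive(n - 1) + fib_recursive(n - 2);
    }

    /* Iterative version: one pass, two variables, no repeated work. */
    static long fib_iterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

(Memoization fixes the recursive version, of course, but the point is that the naive form is easy to write and spectacularly wasteful.)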
Depends on the algorithm. Small stack usage: use the system stack. Lots of stack needed: go to the heap. The stack size is limited by the OS, beyond which the OS throws a stack overflow ;-) If the algorithm uses more stack space than that, I would go with an explicit stack data structure and push the data onto the heap.
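For illustration, here is a rough C sketch of that second option: replacing a recursive binary-tree traversal with an explicit, heap-allocated stack of pointers. The names and the fixed capacity are invented for the example.

    #include <stdlib.h>

    typedef struct Node {
        int value;
        struct Node *left, *right;
    } Node;

    /* Option 1: recursion - the call frames live on the system stack. */
    static int sum_recursive(const Node *n) {
        return n ? n->value + sum_recursive(n->left) + sum_recursive(n->right) : 0;
    }

    /* Option 2: the same traversal, but the "stack" is our own array on the heap. */
    static int sum_iterative(const Node *root, size_t capacity) {
        const Node **stack = malloc(capacity * sizeof *stack);
        if (!stack)
            return 0;                         /* allocation failure: bail out */

        size_t top = 0;
        int total = 0;
        if (root)
            stack[top++] = root;

        while (top > 0) {
            const Node *n = stack[--top];     /* pop */
            total += n->value;
            if (n->right && top < capacity) stack[top++] = n->right;   /* push */
            if (n->left  && top < capacity) stack[top++] = n->left;
        }

        free(stack);
        return total;
    }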
Hm, I think it depends on the problem...
The stack size, if I got your point, is not the only thing that limits you to one approach or the other.
But wanting to use recursion... well, no harm, really, given that stack depth, but I'd rather build my own solution.
Avoid recursion when you can. :)
Recursion may be the simplest way to solve a particular problem. An iterative solution can require more code and more opportunities for errors. The testing and maintenance cost may be greater than the performance benefit.
I would go with the first: use the system stack. That being said, in the language FORTH there are two system stacks: one is the return stack and the other is the parameter stack. This offers some nice flexibility.
I'm looking at refactoring a lot of large (1000+ lines) methods into nice chunks that can then be unit tested as appropriate.
This started me thinking about the call stack, as many of my refactored blocks have other refactored blocks within them, and my large methods may well have been called by other large methods.
I'd like to open this for discussion to see if refactoring can lead to call stack issues. I doubt it will in most cases, but wondered about refactored recursive methods and whether it would be possible to cause a stack overflow without creating an infinite loop?
Excluding recursion, I wouldn't worry about call stack issues until they appear (which they likely won't).
Regarding recursion: it must be carefully implemented and carefully tested no matter how it's done so this would be no different.
I guess it's technically possible. But not something that I would worry about unless it actually happens when I test my code.
When I was a kid, and computers had 64K of RAM, the call stack size mattered.
Nowadays, it's hardly worth discussing. Memory is huge, stack frames are small, a few extra function calls are hardly measurable.
As an example, Python has an artificially small call stack so it detects infinite recursion promptly. The default limit is 1000 frames, but it is adjustable with a simple API call (sys.setrecursionlimit()).
The only way to run afoul of the stack in Python is to tackle Project Euler problems without thinking. Even then, you typically run out of time before you run out of stack. (100 trillion loops would take far longer than a human lifespan.)
I think it's highly unlikely that you'll get a stack overflow without recursion when refactoring. The only way I can see this happening is if you are allocating and/or passing a lot of data between methods on the stack itself.
Most of the literature on virtual memory points out that, as an application developer, understanding virtual memory can help me harness its powerful capabilities. I have been developing applications on Linux for some time, but I have never cared about virtual memory intricacies while coding. Am I missing something? If so, please shed some light on how I can leverage the workings of virtual memory. Otherwise, let me know if the question doesn't make sense!
Well, the concept is pretty simple actually. I won't repeat it here, but you should pick up any book on OS design and it will be explained there. I recommend "Operating System Concepts" by Silberschatz and Galvin - it's what I had to use at university, and it's good.
A couple of things I can think of that knowledge of virtual memory might give you:
Learning to allocate memory on page boundaries to avoid waste (applies only to virtual memory, not the usual heap/stack memory);
Lock some pages in RAM so they don't get swapped to HDD;
Guard pages;
Reserving some address range and committing actual memory later;
Perhaps using the NX (non-executable) bit to increase security, but I'm not sure on this one.
PAE for accessing >4GB on a 32-bit system.
Still, all of these things would have uses only in quite specific scenarios. Indeed, 99% of applications need not concern themselves with this.
Added: That said, it's definitely good to know all these things, so that you can identify such scenarios when they arise. Just beware - with power comes responsibility.
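As a rough illustration of two items from the list above (page-aligned allocation and locking pages in RAM), here is a hedged Linux/POSIX sketch; error handling is minimal and the sizes are arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);    /* page size, typically 4096 bytes */
        size_t len = 4 * (size_t)page;

        /* Allocate memory aligned to a page boundary. */
        void *buf = NULL;
        if (posix_memalign(&buf, (size_t)page, len) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        /* Pin the pages in RAM so they are never swapped out
         * (requires appropriate privileges / RLIMIT_MEMLOCK headroom). */
        if (mlock(buf, len) != 0)
            perror("mlock");

        memset(buf, 0, len);                  /* use the memory as usual */

        munlock(buf, len);
        free(buf);
        return 0;
    }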
It's a bit of a vague question.
The main way you can use virtual memory explicitly is through memory-mapped files. See the mmap() man page for more details.
You are probably using them implicitly anyway, as every dynamic library is implemented as a mapped file, and many database libraries use them too.
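For reference, a minimal mmap() example in C that maps a file read-only and reads it through the mapping; the file name and the lack of error recovery are only illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);          /* hypothetical input file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file; pages are faulted in on first access. */
        const unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];                              /* no read() calls, no extra buffer copies */
        printf("checksum: %lu\n", sum);

        munmap((void *)p, st.st_size);
        close(fd);
        return 0;
    }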
The interface to use mapped files from higher level languages is often quite inconvenient, which makes them less useful.
The chief benefits of using mapped files are:
No system call overhead when accessing parts of the file (this actually might be a disadvantage, as a page fault probably has as much overhead anyway, if it happens)
No need to copy data from OS buffers to application buffers - this can improve performance
Ability to share memory between processes.
Some drawbacks are:
32-bit machines can run out of address space easily
Tricky to handle file extending correctly
No easy way to see how many / which pages are currently resident (there may be some ways however)
Not good for real-time applications, as a page fault may cause an IO request, which blocks the thread (the file can be locked in memory, however, but only if there is enough physical memory).
Maybe in 9 out of 10 cases you need not worry about virtual memory management; that's the job of the kernel. Only in some highly specialized applications do you need to tweak it.
I know of one article that talks about computer memory management with an emphasis on Linux [ http://lwn.net/Articles/250967 ]. Hope this helps.
For most applications today, the programmer can remain unaware of the workings of computer memory without any harm. But sometimes -- for example the case when you want to improve the footprint of your program -- you do end up having to manipulate memory yourself. In such situations, knowing how memory is designed to work is essential.
In other words, although you can indeed survive without it, learning about virtual memory will only make you a better programmer.
And I would think the Wikipedia article can be a good start.
If you are concerned with performance, understanding the memory hierarchy is important.
For small data sets which are fully contained in physical memory you need to be concerned with caching (accessing memory from the cache is much faster).
When dealing with large data sets -- which may be paged out due to lack of physical memory -- you need to be careful to keep your access patterns localized.
For example, if you declare a matrix in C (int a[rows][cols]), it is allocated row by row. Thus, when scanning the matrix, you should scan by rows rather than by columns; otherwise you will be paging the same data in and out many times.
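To make that concrete, here is a small C sketch contrasting the two traversal orders (the dimensions are arbitrary). The row-by-row loop walks memory sequentially, while the column-by-column loop jumps an entire row between accesses, which is far less cache- and page-friendly.

    #define ROWS 10000
    #define COLS 10000

    static int a[ROWS][COLS];       /* stored row by row (row-major order) */

    long sum_by_rows(void) {        /* sequential access: cache- and page-friendly */
        long sum = 0;
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                sum += a[r][c];
        return sum;
    }

    long sum_by_cols(void) {        /* strided access: touches a different row (and
                                       often a different page) on every iteration */
        long sum = 0;
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                sum += a[r][c];
        return sum;
    }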
Another issue is the difference between dirty and clean data held in memory. Clean data is information loaded from a file that has not been modified by the program. The OS may drop clean pages (perhaps depending on how they were loaded) without writing them to disk, since they can be re-read from the file; dirty pages must first be written to the swap file.
What is "tagged memory" and how does it help in reducing program size?
You may be referring to a tagged union, or more specifically a hardware implementation like the tagged architecture used in LISP machines. Basically a method for storing data with type information.
In a LISP machine, this was done in-memory by using a longer word length and using some of the extra bits to store type information. Handling and checking of tags was done implicitly in hardware.
For a type-safe C++ implementation, see boost::variant.
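As a software analogue of what the LISP hardware did, here is a rough C sketch of a tagged union; the tag field plays the role of the extra type bits that a tagged architecture stores alongside every word, except that here the checking is done explicitly in software:

    #include <stdio.h>

    /* The tag tells us how to interpret the bits that follow. */
    typedef enum { TAG_INT, TAG_DOUBLE, TAG_STRING } Tag;

    typedef struct {
        Tag tag;
        union {
            int         i;
            double      d;
            const char *s;
        } as;
    } Value;

    static void print_value(const Value *v) {
        switch (v->tag) {                 /* type check in software, not hardware */
        case TAG_INT:    printf("int: %d\n", v->as.i);    break;
        case TAG_DOUBLE: printf("double: %f\n", v->as.d); break;
        case TAG_STRING: printf("string: %s\n", v->as.s); break;
        }
    }

    int main(void) {
        Value a = { .tag = TAG_INT,    .as.i = 42 };
        Value b = { .tag = TAG_STRING, .as.s = "hello" };
        print_value(&a);
        print_value(&b);
        return 0;
    }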
Not sure, but it is possible that you are referring to garbage collection, which is the process of automatically disposing of no longer used objects created when running a program.
"Tagged memory" can be a synonym for mark-and-sweep, which is the most basic way to implement garbage collection.
If this is all wrong, please edit your question to clarify.
The Windows DDK makes use of "pool tags" when allocating memory out of the kernel page pool. It costs 4 bytes of memory per allocation, but allows you to label (i.e. tag) portions of kernel memory which might help with debugging and detecting memory leaks.
BTW I don't see how anything called "tagged memory" could reduce program code size. It sounds like extra work, which translates to "more code" and "bigger program." Maybe it's meant to reduce the memory footprint somehow?
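If pool tags are what was meant, kernel-mode code conventionally passes a four-character tag with every pool allocation; a hedged WDK-style sketch (the tag value and the size are made up for illustration):

    #include <wdm.h>

    /* Tags are conventionally written reversed so they read correctly
     * in pool-inspection tools such as poolmon. */
    #define MY_POOL_TAG 'tseT'   /* shows up as "Test" */

    NTSTATUS AllocateExample(void)
    {
        PVOID buffer = ExAllocatePoolWithTag(NonPagedPool, 256, MY_POOL_TAG);
        if (buffer == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        /* ... use the buffer ... */

        ExFreePoolWithTag(buffer, MY_POOL_TAG);
        return STATUS_SUCCESS;
    }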
Here's a more technical description going into the implementation details of how this is used for garbage collection. You may also want to check out the Wikipedia article on tagged pointers.