In a downward-growing stack, what's the rationale for stack variables being written in an upward direction? For example, say I have char buf[200] at memory address 0x400. When I write to this array, I write from 0x400 up to 0x4C8, which is toward previous stack frames. This makes the program vulnerable to buffer overflows that can take control of the program by overwriting return addresses, etc. So why not just write the array from 0x4C8 down to 0x400?
It doesn't matter: once you write beyond 200 bytes, you are writing to addresses that do not belong to the array (out of bounds), and that is a buffer overflow regardless of which direction the array grows.
The answer to the following question is O(n) for pop and O(1) for push, but I don't quite understand why pop cannot be O(1). We have a tail pointer pointing to the end of the linked list, and we should be able to access it in O(1) time, right? Did I miss anything here?
What is the running time of the push and pop operations if the bottom of the stack must be at the head of the linked memory structure, where n is the number of nodes in the structure?
Think about the invariants of your stack: tail points to the last element.
The pop operation removes the last element, which means tail must be re-adjusted. How do we do this? With a doubly-linked list we could just follow a pointer back to the previous node, but in a singly-linked list there is no such link back to the previous node.
So instead we need to start at head (the only other node for which we hold a pointer), and iterate all the way until we arrive at the second-to-last node, and then set tail to point to that node.
I'm trying to understand the space complexity of moving elements from one stack to another stack.
I found this article on leetcode but there are some discrepancies.
Say we move stack1 (1-2-3) to another stack2 by calling pop() and push() three times. Do we consider this O(1) extra space, since deleting one element from stack1 and creating one in stack2 uses no net extra space? Or is it O(n) space complexity, because we created a stack2 the same size as stack1 (even though stack1 is gone)?
Thanks in advance!
That depends on how your stack is implemented. That is, what is the backing store?
If the stack is implemented as an array, then the stack will be initialized with some default capacity, and a current count of 0. Something like:
int* s = new int[10];  // default capacity
int count = 0;
When you push something onto the stack, the item is added and the count is incremented to 1. If you try to push more items than the array will hold, then the array is extended. (Actually, probably a new array is allocated and the existing contents copied to it.)
When you pop something, count is reduced by 1, and s[count] is returned. But the array allocation is not changed.
So if the stack is implemented as an array, then copying from one stack to another will require O(n) extra space. At least temporarily.
If the stack is implemented as a linked list:
head -> node -> node -> node ...
Then typically pop just returns head, and the new head is the node that head previously pointed to. In this case, the memory occupied by the stack is reduced with each pop. Of course, adding to the new stack increases the memory allocation by the same amount. So if the stack is implemented as a linked list, then copying from one stack to another is O(1) extra space.
I have a question about dynamic memory allocation in C or C++.
When we want to figure out the size of an array, we use the sizeof operator.
Furthermore, if we want to figure out the number of elements an array has, we do this:
int a[20];
cout << sizeof(a) / sizeof(a[0]) << endl;
I was wondering if we can figure out the number of elements and the actual size of memory that is allocated dynamically.
I would really appreciate it if you could tell me how, or point me to a reference.
In your code, a[20] has automatic storage duration (it lives on the stack). The memory used is always 20 * sizeof(int), and it is freed when the function returns.
When you allocate an array dynamically (on the heap), like this: int* a = malloc(20*sizeof(int)), it is your choice how much memory to take and how to fill it, so it is your duty to track how many elements your data structure holds; there is no portable way to ask how big a malloc'd block is, and sizeof(a) only gives you the size of the pointer. The memory stays allocated until you free it (free(a)).
Either way, the actual size of the memory occupied is the same (in our case, 20*sizeof(int)).
See this question for more about stack/heap and static/dynamic allocation: Stack, Static, and Heap in C++
Can stack overflows be mitigated by having arrays grow upward instead of down into the return address / stack frame?
Quite simply: No. Don't you think that this would be standard, if that were the case?
You seem to be conflating two different behaviors: a stack overflow has nothing to do with arrays. Take infinite recursion: memory is not infinite, so at some point the code needs memory for another stack frame that is simply not available, no matter which way arrays grow.
I have a question regarding dynamic vertex and index buffers. Can I change their topology completely? For example, can I have one set of vertices in one frame, throw them away, and recreate vertices with their own properties and a count different from the previous count? I'd also like to know the same about index buffers: can I change the number of indices in a dynamic index buffer?
So far, my application produces a warning when I try to use an index buffer with more indices than it was created for:
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: Index buffer has not enough space! [ EXECUTION WARNING #359: DEVICE_DRAW_INDEX_BUFFER_TOO_SMALL]
Changing the size of a buffer after creation is not possible.
A dynamic buffer allows you to update the data. You can write new data to it as long as it does not exceed the buffer's size.
But buffers don't care about data layout: a buffer with a size of 200 bytes can hold 100 shorts, or 50 floats, or mixed data; anything of 200 bytes or less.