What is the address space in Go(lang)?

I'm trying to understand the basics of concurrent programming in Go. Almost all articles use the term "address space", for example: "All goroutines share the same address space". What does it mean?
I've tried to understand the following topics on Wikipedia, but without success:
http://en.wikipedia.org/wiki/Virtual_memory
http://en.wikipedia.org/wiki/Memory_segmentation
http://en.wikipedia.org/wiki/Page_(computer_memory)
...
However, at the moment it's difficult for me to understand, because my knowledge of areas like memory management and concurrent programming is really poor. There are many unfamiliar terms like segments, pages, relative/absolute addresses, VAS, etc.
Could anybody explain the basics to me? Maybe there are some useful articles that I couldn't find.

Golang spec:
A "go" statement starts the execution of a function call as an independent concurrent thread of control, or goroutine, within the same address space.
Could anybody explain to me the basics of the problem?
"Address space" is a generic term which can apply to many contexts:
Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous (within a particular address space)
Dave Cheney's presentation "Five things that make Go fast" illustrates the main issue addressed by having goroutines within the same process address space: stack management.
Dave qualifies the "address space", speaking first of threads:
Because a process switch can occur at any point in a process’ execution, the operating system needs to store the contents of all of these registers because it does not know which are currently in use.
This led to the development of threads, which are conceptually the same as processes, but share the same memory space.
(so this is about memory)
Then Dave illustrates the stack within a process address space (the addresses managed by a process):
Traditionally inside the address space of a process,
the heap is at the bottom of memory, just above the program (text) and grows upwards.
The stack is located at the top of the virtual address space, and grows downwards.
See also "What and where are the stack and heap?".
The issue:
Because the heap and stack overwriting each other would be catastrophic, the operating system usually arranges to place an area of unwritable memory between the stack and the heap to ensure that if they did collide, the program will abort.
With threads, that can end up restricting the address space available to a process:
as the number of threads in your program increases, the amount of available address space is reduced.
Goroutines use a different approach, while still sharing the same process address space:
what about the stack requirements of those goroutines ?
Instead of using guard pages, the Go compiler inserts a check as part of every function call to check if there is sufficient stack for the function to run. If there is not, the runtime can allocate more stack space.
Because of this check, a goroutine's initial stack can be made much smaller, which in turn permits Go programmers to treat goroutines as cheap resources.
Go 1.3 introduces a new way of managing those stacks:
Instead of adding and removing additional stack segments, if the stack of a goroutine is too small, a new, larger, stack will be allocated.
The old stack’s contents are copied to the new stack, then the goroutine continues with its new larger stack.
After the first call to H the stack will be large enough that the check for available stack space will always succeed.
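To see this stack management in action, here is a small sketch in plain Go (the function name and the depth of 5000 are arbitrary): each call frame keeps a 1 KiB buffer alive, so a few thousand nested calls need far more space than a goroutine's small initial stack, yet the program runs fine because the runtime grows and copies the stack at the per-call checks described above.

    package main

    import "fmt"

    // deep keeps a 1 KiB buffer alive in every frame, so 5000 nested calls need
    // several megabytes of stack. A goroutine starts with only a few kilobytes;
    // the runtime's stack check at each call notices the shortfall, allocates a
    // larger stack and copies the old contents over.
    func deep(n int) int {
        var buf [1024]byte
        for i := range buf {
            buf[i] = byte(n)
        }
        sum := int(buf[len(buf)-1])
        if n == 0 {
            return sum
        }
        return sum + deep(n-1)
    }

    func main() {
        done := make(chan int)
        go func() { done <- deep(5000) }()
        fmt.Println("recursion finished, result:", <-done)
    }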

When your application runs in RAM, addresses in RAM are allocated to your application by the memory manager. This is referred to as its address space.
Concept:
The processor (CPU) executes instructions in a fetch-decode-execute cycle. It executes the instructions of an application by fetching them into RAM (Random Access Memory). This is done because it is very inefficient to fetch them all the way from disk. Someone needs to keep track of memory usage, so the operating system implements a memory manager. Your application consists of some program, in your case written in the Go programming language. When you execute it, the OS executes its instructions in the above-mentioned fashion.
Reading your post, I can empathize. The terms you mentioned will become familiar to you as you program more and more.
I first encountered these terms in the operating systems book, a.k.a. the dinosaur book (Operating System Concepts).
Hope this helps you.
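To make "all goroutines share the same address space" concrete, here is a minimal sketch (the names worker and counter are just illustrative): two goroutines are handed the same variable, print the same address for it, and both update it.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0 // one variable, one address in the process's address space
        var mu sync.Mutex
        var wg sync.WaitGroup

        worker := func(name string) {
            defer wg.Done()
            mu.Lock()
            counter++
            // Both goroutines print the same address: they share the address space.
            fmt.Printf("%s sees counter at %p, value now %d\n", name, &counter, counter)
            mu.Unlock()
        }

        wg.Add(2)
        go worker("goroutine A")
        go worker("goroutine B")
        wg.Wait()
    }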

Related

Operating Systems: Processes, Pagination and Memory Allocation doubts

I have several doubts about processes and memory management; I'll list the main ones. I'm slowly trying to resolve them by myself, but I would still like some help from you experts =).
I understood that the data structures associated with a process are more or less these:
text, data, stack, kernel stack, heap, PCB.
If the process is created but the LTS decides to send it to secondary memory, are all the data structures copied, for example to the SSD, or maybe just text and data (with the PCB staying in kernel space)?
Paging allows you to allocate processes in a non-contiguous way:
How does the kernel know if the process is trying to access an illegal memory area? After not finding the index in the page table, does the kernel realize that it is not even in virtual memory (secondary memory)? If so, is an interrupt (or exception) raised? Is it handled immediately or later (maybe there was a process switch)?
If the processes are allocated non-contiguously, how does the kernel realize that there has been a stack overflow, since the stack typically grows down and the heap up? Perhaps the kernel uses virtual addresses in PCBs as memory pointers that are contiguous for each process, so at each function call it checks whether the VIRTUAL pointer to the top of the stack has touched the heap?
How do programs generate their internal addresses? For example, in the case of virtual memory, does everyone assume addresses starting from 0x0000... up to 0xffffff..., and is it then up to the kernel to proceed with the mapping?
How do processes end? Is the system call exit called both in the case of normal termination (the last instruction finished) and in the case of killing (by the parent process, the kernel, etc.)? Does the process itself enter kernel mode and free up its associated memory?
When are the kernel schedulers (LTS, MTS, STS) invoked? From what I understand, there are three types of kernels:
separate kernel, below all processes.
the kernel runs inside the processes (they only change modes) but there are "process switching functions".
the kernel itself is based on processes but still everything is based on process switching functions.
I guess the number of pages allocated to the text and data depends on the "length" of the code and the "global" data. On the other hand, is the number of pages allocated for the heap and stack variable for each process? For example, I remember that the JVM allows you to change the size of the stack.
When a running process wants to write n bytes in memory, does the kernel try to fill a page already dedicated to it, creating a new one for the remaining bytes (so the page table is lengthened)?
I really thank those who will help me.
Have a good day!
I think you have lots of misconceptions. Let's try to clear some of these.
If the process is created but the LTS decides to send it to secondary memory, are all the data structures copied, for example to the SSD, or maybe just text and data (with the PCB staying in kernel space)?
I don't know what you mean by LTS. The kernel can decide to send some pages to secondary memory but only on a page granularity. Meaning that it won't send a whole text segment nor a complete data segment but only a page or some pages to the hard-disk. Yes, the PCB is stored in kernel space and never swapped out (see here: Do Kernel pages get swapped out?).
How does the kernel know if the process is trying to access an illegal memory area? After not finding the index in the page table, does the kernel realize that it is not even in virtual memory (secondary memory)? If so, is an interrupt (or exception) raised? Is it handled immediately or later (maybe there was a process switch)?
On x86-64, each page table entry has 12 bits reserved for flags. The first (right-most) bit is the present bit. On access to the page referenced by this entry, it tells the processor whether it should raise a page fault. If the present bit is 0, the processor raises a page fault and calls a handler defined by the OS in the IDT (interrupt 14). Virtual memory is not secondary memory; they are not the same. Virtual memory doesn't have a physical medium to back it. It is a concept that is implemented in hardware, yes, but with logic, not with a physical medium. The kernel holds a memory map of the process in the PCB. On a page fault, if the access was not within this memory map, it will kill the process.
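As a small illustration of those flag bits, here is a toy decode of a made-up x86-64 page-table entry; the entry value is invented for the example, only the bit positions (present = bit 0, writable = bit 1, user = bit 2) are real.

    package main

    import "fmt"

    func main() {
        const (
            flagPresent  = 1 << 0 // page is mapped to a physical frame
            flagWritable = 1 << 1 // writes are allowed
            flagUser     = 1 << 2 // user-mode code may access it
        )
        pte := uint64(0x0000000012345067) // hypothetical page-table entry

        fmt.Printf("present:  %v\n", pte&flagPresent != 0)
        fmt.Printf("writable: %v\n", pte&flagWritable != 0)
        fmt.Printf("user:     %v\n", pte&flagUser != 0)
        // The upper bits hold the physical frame address (ignoring the NX bit here).
        fmt.Printf("frame:    %#x\n", pte&^uint64(0xfff))
    }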
If the processes are allocated non-contiguously, how does the kernel realize that there has been a stack overflow, since the stack typically grows down and the heap up? Perhaps the kernel uses virtual addresses in PCBs as memory pointers that are contiguous for each process, so at each function call it checks whether the VIRTUAL pointer to the top of the stack has touched the heap?
The processes are allocated contiguously in virtual memory but not in physical memory. See my answer here for more info: Each program allocates a fixed stack size? Who defines the amount of stack memory for each application running?. I think stack overflow is checked with a guard page. The stack has a maximum size (8 MB) and one page marked not-present is left underneath to make sure that, if this page is accessed, the kernel is notified via a page fault that it should kill the process. In itself, there can be no stack overflow attack in user mode because the paging mechanism already isolates different processes via the page tables. The heap has a portion of virtual memory reserved and it is very big. The heap can thus grow according to how much physical space you actually have to back it, that is, the size of the swap file + RAM.
How do programs generate their internal addresses? For example, in the case of virtual memory, does everyone assume addresses starting from 0x0000... up to 0xffffff..., and is it then up to the kernel to proceed with the mapping?
Programs assume a base address (often 0x400000) for the executable. Today you also have ASLR, where all symbols are kept in the executable and their addresses are determined at load time. In practice, this is not used much (but it is supported).
How do processes end? Is the system call exit called both in the case of normal termination (the last instruction finished) and in the case of killing (by the parent process, the kernel, etc.)? Does the process itself enter kernel mode and free up its associated memory?
The kernel has a memory map for each process. When the process dies via abnormal termination, the memory map is walked and cleared of that process's use.
When are the kernel schedulers (LTS, MTS, STS) invoked?
All your assumptions are wrong. The scheduler is not invoked other than through a timer interrupt. The kernel isn't a process. There can be kernel threads, but they are mostly created via interrupts. The kernel starts a timer at boot and, when a timer interrupt occurs, the kernel calls the scheduler.
I guess the number of pages allocated to the text and data depends on the "length" of the code and the "global" data. On the other hand, is the number of pages allocated for the heap and stack variable for each process? For example, I remember that the JVM allows you to change the size of the stack.
The heap and stack have portions of virtual memory reserved for them. The text/data segments start at 0x400000 and end wherever they need to. The space reserved for them in virtual memory is really big, so they are limited by the amount of physical memory available to back them. The JVM is another matter: the stack in the JVM is not the real stack; it is probably heap memory, because the JVM allocates heap for all of the program's needs.
When a running process wants to write n bytes in memory, does the kernel try to fill a page already dedicated to it, creating a new one for the remaining bytes (so the page table is lengthened)?
The kernel doesn't do that. On Linux, the libstdc++/libc C++/C implementation does that instead. When you allocate memory dynamically, the C++/C implementation keeps track of the allocated space so that it won't request a new page for a small allocation.
EDIT
Do compiled (and interpreted?) programs only work with virtual addresses?
Yes, they do. Everything is a virtual address once paging is enabled. Enabling paging is done via a control register set at boot by the kernel. The MMU of the processor automatically reads the page tables (some of which are cached) and translates these virtual addresses to physical ones.
So do pointers inside PCBs also use virtual addresses?
Yes. For example, the PCB on Linux is the task_struct. It holds a field called pgd which is an unsigned long*. It will hold a virtual address and, when dereferenced, it will return the first entry of the PML4 on x86-64.
And since the virtual memory of each process is contiguous, the kernel can immediately recognize stack overflows.
The kernel doesn't recognize stack overflows. It will simply not allocate more pages to the stack than the maximum stack size, which is a simple global variable in the Linux kernel. The stack is used with pushes and pops. A single push cannot move the stack pointer by more than 8 bytes, so it is simply a matter of reserving a guard page to trigger page faults on access.
However, from what I understand, the scheduler is invoked (at least in modern systems) via timer mechanisms (like round-robin). Is that correct?
Round-robin is not a timer mechanism. The timer is interacted with using memory-mapped registers. These registers are detected using the ACPI tables at boot (see my answer here: https://cs.stackexchange.com/questions/141870/when-are-a-controllers-registers-loaded-and-ready-to-inform-an-i-o-operation/141918#141918). It works similarly to the answer I provided for USB (at the link here). Round-robin is a scheduling scheme, often called naive, because it simply gives every process a time slice and executes them in order; it is not what the Linux kernel currently uses (I think).
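For illustration only, here is a toy round-robin loop in that naive sense: every task gets a fixed time slice, in order, until its work is done. The task names and "work units" are made up; a real kernel scheduler is driven by timer interrupts and far more elaborate policies.

    package main

    import "fmt"

    // task is a pretend process with some amount of work left.
    type task struct {
        name      string
        remaining int
    }

    func main() {
        const slice = 2 // time slice granted per turn
        queue := []task{{"A", 5}, {"B", 3}, {"C", 4}}

        for len(queue) > 0 {
            t := queue[0]
            queue = queue[1:]

            run := slice
            if t.remaining < run {
                run = t.remaining
            }
            t.remaining -= run
            fmt.Printf("ran %s for %d units, %d left\n", t.name, run, t.remaining)

            if t.remaining > 0 {
                queue = append(queue, t) // back of the queue for the next round
            }
        }
    }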
I did not understand the last point. How is the allocation of new memory managed?
The allocation of new memory is done with a system call. See my answer here for more info: Who sets the RIP register when you call the clone syscall?.
The user mode process jumps into a handler for the system call by calling syscall in assembly. It jumps to an address specified at boot by the kernel in the LSTAR64 register. From that assembly entry point, the kernel jumps to a function. This function does the work the user mode process requires and returns to the user mode process. This is often not done by the programmer but by the C++/C implementation (often called the standard library), which is a user-mode library that is linked against dynamically.
The C++/C standard library will keep track of the memory it allocated by, itself, allocating some memory and by keeping records. Then, if you ask for a small allocation, it will use the pages it already allocated instead of requesting new ones using mmap (on Linux).
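Here is a minimal, Linux-only sketch of that last step, requesting fresh anonymous pages straight from the kernel with mmap (roughly what the C library does under the hood before carving the pages into small allocations); the 4096-byte length is just one page, chosen for illustration.

    //go:build linux

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        const length = 4096 // one page
        mem, err := syscall.Mmap(-1, 0, length,
            syscall.PROT_READ|syscall.PROT_WRITE,
            syscall.MAP_ANON|syscall.MAP_PRIVATE)
        if err != nil {
            panic(err)
        }
        defer syscall.Munmap(mem)

        mem[0] = 42 // first touch of the page; the kernel maps a physical frame on demand
        fmt.Printf("mapped %d bytes at %p, first byte = %d\n", len(mem), &mem[0], mem[0])
    }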

Committed vs Reserved Memory

According to "Windows Internals, Part 1" (7th Edition, Kindle version):
Pages in a process virtual address space are either free, reserved, committed, or shareable.
Focusing only on the reserved and committed pages, the first type is described in the same book:
Reserving memory means setting aside a range of contiguous virtual addresses for possible future use (such as an array) while consuming negligible system resources, and then committing portions of the reserved space as needed as the application runs. Or, if the size requirements are known in advance, a process can reserve and commit in the same function call.
Both reserving and committing will initially get you entries in the VADs (virtual address descriptors), but neither operation will touch the PTE (page table entry) structures. Reserving used to cost PTEs before Windows 8.1, but not anymore.
As described above, reserved means blocking a range of virtual addresses, NOT blocking physical memory or paging file space at the OS level. The OS doesn't include this in the commit limit, therefore when the time comes to allocate this memory, you might get a surprise. It's important to note that reserving happens from the perspective of the process address space. It's not that there's any physical resource reserved - there's no stamping of "no vacancy" against RAM space or page file(s).
The analogy with plots of land might be missing something: take reserved as the area of land surrounded by wooden poles, thus letting others know that the land is taken. But what about committed? It can't be land on which structures (e.g. houses) have already been built, since those would require PTEs and there are none there yet, since we haven't accessed anything. It's only when touching committed data that the PTEs will get built, which makes the pages available to the process.
The main problem is that committed memory - at least in its initial state - is functionally very much like reserved memory. It's just an area blocked out within the VADs. Try to touch one of the addresses, and you'll get an access violation exception for a reserved address:
Attempting to access free or reserved memory results in an access violation exception because the page isn’t mapped to any storage that can resolve the reference
...and an initial page fault for a committed one (immediately followed by the required PTE entries being created).
Back to the land analogy, once houses are built, that patch of land is still committed. Yet this is a bit peculiar, since it was already committed when the original grass was there, before the first shovelful was excavated to start construction. It resembled the same state as that of a reserved patch. Maybe it would be better to think of it as terrain eligible for construction, e.g. you have a permit to build (albeit you might never build as much as a wall on that patch of land).
What would be the reasons for using one type of memory versus the other? There's at least one: the OS guarantees that there will be room to allocate committed memory, should that ever be needed in the future, but doesn't guarantee anything for reserved memory aside from blocking out that range of the process's address space. The only downside of committed memory is that one or more paging files might need to grow so that the commit limit can account for the recently allocated block; that way, should the requester demand part or all of the data in the future, the OS can provide access to it.
I can't really think how the land analogy can capture this detail of "guarantee". After all, the reserved patch also physically existed, covered by the same grass as a committed one in its pristine state.
The stack is another scenario where reserved and committed memory are used together:
When a thread is created, the memory manager automatically reserves a predetermined amount of virtual memory, which by default is 1 MB.[...] Although 1 MB is reserved, only the first page of the stack will be committed [...]
along with a guard page. When a thread’s stack grows large enough to touch the guard page, an exception occurs, causing an attempt to allocate another guard. Through this mechanism, a user stack doesn’t immediately consume all 1 MB of committed memory but instead grows with demand."
There is an answer here that deals with why one would want to use reserved memory as opposed to committed. It involves storing continuously expanding data - which is actually the stack model described above - and having specific absolute address ranges available when needed (although I'm not sure why one would want to do that within a process).
Ok, what am I actually asking?
What would be a good analogy for the reserved/committed concept?
Any other reason, aside from those depicted above, that would mandate the use of reserved memory? Are there any interesting use cases where resorting to reserved memory is a smart move?
Your question hits upon the difference between logical memory translation and virtual memory translation. While CPU documentation likes to conflate these two concepts, they are different in practice.
If you look at logical memory translation, there are only two states for a page. Using your terminology, they are FREE and COMMITTED. A free page is one that has no mapping to a physical page frame, and a COMMITTED page has such a mapping.
In a virtual memory system, the operating system has to maintain a copy of the address space in secondary storage. How this is done depends upon the operating system. Typically, a process will have its mapping to several different files for secondary storage. The operating system divides the address space into what is usually called a SECTION.
For example, the code and read-only data could be stored virtually as one or more SECTIONS in the executable file. Code and static data in shared libraries could each be in a different section that is paged to the shared libraries. You might have a shared file mapped into the process, providing memory that can be accessed by multiple processes; that forms another section. Most of the read/write data is likely to be in a page file in one or more sections. How the operating system tracks where it virtually stores each section of data is system dependent.
For Windows, that gives the definition of one of your terms: sharable. A sharable section is one where a range of addresses can be mapped into different processes, at different (or possibly the same) logical addresses.
Your last term is then RESERVED. If you look at the Windows VirtualAlloc function documentation, you can see that (among your options) you can RESERVE or COMMIT. If you reserve, you are creating a section of VIRTUAL MEMORY that has no mapping to physical memory.
This RESERVE/COMMIT model is Windows-specific (although other operating systems may do the same). The likely reason was to save disk space. When Windows NT was developed, 600 MB drives the size of a washing machine were still in use.
In these days of 64-bit address spaces, this system works well for (as you say) expanding data. In theory, an exception handler for a stack overrun can simply expand the stack. Reserving 4GB of memory takes no more resources than reserving a single page (which would not be practicable in a 32-bit system—see above). If you have 20 threads, this makes reserving stack space efficient.
What would be a good analogy for the reserved/committed concept?
One could say RESERVE is like buying an option and COMMIT is exercising the option.
Any other reason, aside from those depicted above, that would mandate the use of reserved memory? Are there any interesting use cases where resorting to reserved memory is a smart move?
IMHO, the most likely places to RESERVE without COMMITTING are for creating stacks and heaps with the former being the most important.
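A hedged, Windows-only sketch of that reserve-then-commit pattern, assuming the golang.org/x/sys/windows wrappers for VirtualAlloc/VirtualFree; the region and page sizes are arbitrary and error handling is minimal.

    //go:build windows

    package main

    import (
        "fmt"
        "unsafe"

        "golang.org/x/sys/windows"
    )

    func main() {
        const region = 1 << 20 // reserve 1 MiB of address space
        const page = 4096      // commit just one page of it

        // RESERVE: block out a range of virtual addresses; no storage is charged yet.
        base, err := windows.VirtualAlloc(0, region, windows.MEM_RESERVE, windows.PAGE_NOACCESS)
        if err != nil {
            panic(err)
        }
        defer windows.VirtualFree(base, 0, windows.MEM_RELEASE)

        // Touching the range now would raise an access violation, so COMMIT the
        // first page before using it (this is what counts against the commit limit).
        addr, err := windows.VirtualAlloc(base, page, windows.MEM_COMMIT, windows.PAGE_READWRITE)
        if err != nil {
            panic(err)
        }

        buf := unsafe.Slice((*byte)(unsafe.Pointer(addr)), page)
        buf[0] = 1 // first access builds the PTE via a demand-zero page fault
        fmt.Printf("reserved %#x bytes at %#x, committed the first page, buf[0]=%d\n",
            region, base, buf[0])
    }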

What is the real memory structure? [duplicate]

This question already has answers here:
Why do stacks typically grow downwards?
(11 answers)
Closed 5 years ago.
I have a silly doubt about something related to memory. My doubt is: why are the higher addresses considered to be at the "bottom" of memory, and the lowest addresses at the "top"? I'm going to explain in more detail:
Stack memory starts at high addresses and grows toward lower addresses. So far that's what I understood, but then why is the stack said to grow "up"? Why are the lower addresses located at the top of memory?
I've seen various, contradictory memory diagrams: ones which place the lower addresses at the bottom of memory, and ones which place the lower addresses at the top. Does it depend on the processor?
Thank you in advance.
It depends upon the instruction set architecture and the ABI. They both influence the organization (and growth direction) of the call stack.
Usually the call stack grows downwards, but there have been ISAs where that is not the case (see this). And some ISAs (e.g. IBM Z series mainframes) don't have any hardware call stack (there, the call stack is just an ABI convention about register usage).
Most application software (e.g. your game, word processor, compiler, ...) are running above some operating system, in some process having some virtual address space (so in virtual memory).
Read some book on OSes, e.g. Operating Systems: Three Easy Pieces.
In practice (unless you code some operating system kernel managing virtual memory) you care mostly about the virtual address space (often made of numerous discontinuous segments). On Linux, use /proc/ (see proc(5)) to explore it (e.g. try cat /proc/$$/maps in your terminal). And notice that for a multi-threaded application each thread has its own call stack. The "top" or "bottom" of the virtual address space then don't really matter and don't make much sense.
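A tiny Go equivalent of that cat /proc/$$/maps suggestion, printing just the [stack] and [heap] lines of the current process's map (Linux-only; note that a Go program may show no [heap] line at all, since the Go runtime obtains its memory with mmap rather than brk):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/self/maps") // Linux-only view of the virtual address space
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot read /proc/self/maps:", err)
            return
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.Contains(line, "[stack]") || strings.Contains(line, "[heap]") {
                fmt.Println(line)
            }
        }
    }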
If (as most people) you are writing some application software (in any programming language other than assembler) above some OS, you don't care (as a developer) about real memory, but about virtual memory and resident set size. You don't care about the stack growth (it is managed by the OS, compiler, ISA, ... for the automatic variables of your code). You need to avoid stack overflow. It often happens that some pages (e.g. of your code segment[s]), perhaps those containing never-used code, never get into RAM and stay paged out. And in practice most of the (virtual) memory of a process is not for its call stack: you usually allocate memory on the heap. My Firefox browser (on my Linux desktop) has a virtual address space of 2.3 gigabytes (in more than a thousand segments), but only 124 kilobytes of stack. Read about memory management. The call stack is often limited (e.g. to a few megabytes).

memory management and segmentation faults in modern day systems (Linux)

In modern-day operating systems, memory is available as an abstracted resource. A process is exposed to a virtual address space (which is independent of the address spaces of all other processes) and a whole mechanism exists for mapping any virtual address to some actual physical address.
My doubts are:
If each process has its own address space, then it should be free to access any address within it. So apart from permission-restricted sections like .data, .bss, .text, etc., one should be free to change the value at any address. But this usually gives a segmentation fault. Why?
To acquire dynamic memory, we need to do a malloc. If the whole virtual space is made available to a process, why can't it access it directly?
Different runs of a program result in different addresses for variables (both on the stack and the heap). Why is that, when the environment for each run is the same? Doesn't it affect the amount of addressable memory available for use? (Does it have something to do with address space randomization?)
Some links on memory allocation (e.g. in heap).
The information available in different places is very confusing, as it mixes old and modern approaches, often without distinguishing between them. It would be helpful if someone could clarify these doubts while keeping modern systems in mind, say Linux.
Thanks.
Technically, the operating system is able to allocate any memory page on access, but there are important reasons why it shouldn't or can't:
different memory regions serve different purposes.
code. It can be read and executed, but shouldn't be written to.
literals (strings, const arrays). This memory is read-only and should be.
the heap. It can be read and written, but not executed.
the thread stack. There is no reason for two threads to access each other's stack, so the OS might as well forbid that. Moreover, the thread stack can be de-allocated when the thread ends.
memory-mapped files. Any changes to this region should affect a specific file. If the file is open for reading, the same memory page may be shared between processes because it's read-only.
the kernel space. Normally the application should not (or can not) access that region - only kernel code can. It's basically a scratch space for the kernel and it's shared between processes. The network buffer may reside there, so that it's always available for writes, no matter when the packet arrives.
...
The OS might assume that all unrecognised memory access is an attempt to allocate more heap space, but:
if an application touches the kernel memory from user code, it must be killed. On 32-bit Windows, all memory above 1<<31 (top bit set) or above 3<<30 (top two bits set) is kernel memory. You should not assume any unallocated memory region is in the user space.
if an application thinks about using a memory region but doesn't tell the OS, the OS may allocate something else to that memory (OS: sure, your file is at 0x12341234; App: but I wanted to store my data there). You could tell the OS by touching the end of your array (which is unreliable anyways), but it's easier to just call an OS function. It's just a good idea that the function call is "give me 10MB of heap", not "give me 10MB of heap starting at 0x12345678"
If the application allocates memory by using it then it typically does not de-allocate at all. This can be problematic as the OS still has to hold the unused pages (but the Java Virtual Machine does not de-allocate either, so hey).
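As a concrete (and deliberately crashing) sketch of the first point: touching an address the process never asked the OS for has no page-table mapping behind it, so the CPU faults and the kernel kills the program with SIGSEGV. The address 0x12341234 is borrowed from the example above and is purely illustrative, though it is almost certainly unmapped in practice.

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        fmt.Println("about to dereference an unmapped address...")
        p := (*byte)(unsafe.Pointer(uintptr(0x12341234))) // arbitrary address the OS was never told about
        v := *p                                           // page fault -> SIGSEGV -> process killed
        fmt.Println("never reached:", v)
    }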
Different runs of a program results in different addresses for variables
This is called address space layout randomisation (ASLR) and is used, alongside proper permissions (stack space is not executable), to make buffer overflow attacks much more difficult. You can still kill the app, but not execute arbitrary code.
Some links on memory allocation (e.g. in heap).
Do you mean, what algorithm the allocator uses? The easiest algorithm is to always allocate at the first available position, link from each memory block to the next, and store a flag saying whether it's a free block or a used block. More advanced algorithms allocate blocks whose size is a power of two or a multiple of some fixed size, to prevent memory fragmentation (lots of small free blocks), or link the blocks in different structures to find a free block of sufficient size faster.
An even simpler approach is to never de-allocate: just point to the first (and only) free block and hold its size. If the remaining space is too small, throw it away and ask the OS for a new one.
There's nothing magical about memory allocators. All they do is to:
ask the OS for a large region and
partition it into smaller chunks
without
wasting too much space or
taking too long.
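Here is a minimal sketch of the simplest variant described above (a bump allocator that never de-allocates): it grabs one big block up front, hands out slices from it, and only tracks how much has been used. The names are illustrative, and make() stands in for asking the OS for a large region.

    package main

    import "fmt"

    // bumpArena hands out memory from one big block and never frees it.
    type bumpArena struct {
        buf  []byte
        used int
    }

    func newBumpArena(size int) *bumpArena {
        return &bumpArena{buf: make([]byte, size)} // stands in for "ask the OS for a large region"
    }

    // alloc returns n bytes from the arena, or nil if the arena is exhausted
    // (a real allocator would request another region from the OS at that point).
    func (a *bumpArena) alloc(n int) []byte {
        if a.used+n > len(a.buf) {
            return nil
        }
        p := a.buf[a.used : a.used+n]
        a.used += n
        return p
    }

    func main() {
        arena := newBumpArena(1 << 10)
        b := arena.alloc(16)
        copy(b, "hello")
        fmt.Printf("allocated %d bytes, %d used of %d\n", len(b), arena.used, len(arena.buf))
    }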
Anyway, the Wikipedia article about memory allocation is http://en.wikipedia.org/wiki/Memory_management.
One interesting algorithm is called "(binary) buddy blocks". It holds several pools of a power-of-two size and splits them recursively into smaller regions. Each region is then either fully allocated, fully free or split in two regions (buddies) that are not both fully free. If it's split, then one byte suffices to hold the size of the largest free block within this block.

When does the stack really overflow?

Is infinite recursion the only case or can it happen for other reasons?
Doesn't the stack size grow as needed, the same as the heap?
Sorry if this question has been asked before, would appreciate links to them if that is the case.
I can't speak for all platforms, but as it happens, I've just spent some time working with Windows .exe files (I mean, actually studying their binary format - I know in a sense all of us here work with executable files ;) ). I'm betting that most other platforms have similar capabilities, but I'm not immediately familiar with them.
Part of the file format itself includes two values relevant to the current discussion:
typedef struct _IMAGE_OPTIONAL_HEADER {
    ...
    DWORD SizeOfStackReserve;
    DWORD SizeOfStackCommit;
    ...
} IMAGE_OPTIONAL_HEADER32, *PIMAGE_OPTIONAL_HEADER32;
From MSDN:
SizeOfStackReserve: The number of bytes to reserve for the stack. Only the memory specified by the SizeOfStackCommit member is committed at load time; the rest is made available one page at a time until this reserve size is reached.
SizeOfStackCommit: The number of bytes to commit for the stack.
In other words, the linker specifies a maximum size for the program's stack. If you hit the maximum size, you overflow - no matter how you hit the maximum size. You could write a simple program to do it in one line of code just by allocating a single stack variable (say, an array) that's bigger than the maximum stack size. Or you could do it via infinite (or finite, but very deep) recursion, or just by allocating too many stack variables.
The Microsoft linker sets this value to 1 MB by default on x86 platforms (4 MB on Itanium systems). This seems small on the face of it for a modern system. However, more modern versions of Windows interpret these values slightly differently: instead of completely limiting the stack, they limit the physical memory the stack will use. If your stack grows beyond this, virtual memory will get involved, so you should still be good... assuming you have enough virtual memory.
Remember, it is possible to run out of memory, even on modern systems with huge amounts of RAM and plenty of virtual memory on disk. You just need to allocate really big amounts of data.
So, long story short: is it possible to overflow the stack without infinite recursion? Definitely. Is it likely? Not really, unless you're allocating really huge objects.
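For completeness, a small Go sketch of the recursion case: Go grows goroutine stacks on demand, but only up to a limit (1 GB by default on 64-bit systems), so unbounded recursion still dies with "fatal error: stack overflow". The limit is lowered here with runtime/debug.SetMaxStack just so the demo fails quickly; the 8 MiB cap and the 128-byte pad are arbitrary.

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    // recurse never terminates; each call adds another frame until the stack
    // limit is hit and the runtime aborts with "fatal error: stack overflow".
    func recurse(n int) int {
        var pad [128]byte // make each frame a little bigger
        pad[0] = byte(n)
        return int(pad[0]) + recurse(n+1)
    }

    func main() {
        debug.SetMaxStack(8 << 20) // cap goroutine stacks at 8 MiB for this demo
        fmt.Println(recurse(0))    // never prints
    }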
The stack overflows when the stack pointer is pushed out of the memory block the operating system has allocated for the stack. Some operating systems will resize the stack as it grows (IIRC Linux does this) while in others the stack size is fixed at the start of the process or thread (IIRC Windows does this).
Possible reasons for overflowing the stack:
An unbounded number of stack frames (e.g. from unbounded recursion)
Attempting to allocate large blocks from the stack
Buffer overflows for buffers allocated on the stack
There are probably other reasons as well that I can't think of off the top of my head.
This question doesn't specify which stack is "the" stack. So, here are a few answers:
Call Stack
The call stack gets overflowed whenever the number of calls on the stack overruns the amount of memory it has. The most common way is infinite recursion, but it's quite possible to have recursion that's excessive but not infinite. For example, computing the Ackermann function naively will tax any computer.
Languages
Stack-based languages
Some languages, like PostScript and Forth, and some virtual machines, like the Java virtual machine, are stack-based. In these languages, it may be possible to make expressions so complex that they overflow the stack.
Context-free languages
Context-free languages are often implemented using a stack. If the strings for the code in these languages get too complex, it's possible to overflow the stack.
On a laptop or desktop machine it may be unusual to overflow the stack without infinite (or very deeply nested) recursion when running from the main thread... however, stack overflows are not uncommon for:
Threaded code in which the thread has been allotted a small, fixed-sized stack.
Signal handling code in which the signal handling context has a small, fixed-sized stack.
Code executing on embedded devices, where memory is generally scarce.
As an example, if you register a signal handler using sigaction and the signal handler does anything complex (i.e. deeply nested operations), it is very easy to get a stack overflow on a number of operating systems, since signal handlers are usually allotted a small, fixed-sized stack. Similarly, if you spawn a thread with pthread_create but specify a small stack size with pthread_attr_setstacksize, then it is very easy to get a stack overflow. On very memory-limited devices such as wireless sensors, it is an art to avoid stack overflows.
My day job involves a lot of work with LotusScript in Lotus Notes, which has fixed stack limits for various scopes. E.g. most variables in a procedure/function must fit in a 32kB stack, except that the content of class variables is stored on the heap.
If fixed-size variables exceed the stack size, code won't compile.
Run-time stack overflows can occur with recursion. This is easy to achieve in LotusScript as it limits recursion of any single function to a 32kB stack. I gave up on using a recursive QuickSort years ago because of this.
If your program exceeds its allotted stack space without any infinite recursion going on, then you're doing something wrong.
Though it can happen if you leave off some asterisks and try to pass some huge buffers by value.
The memory allocated for the stack does generally grow as needed within reasonable boundaries - I'm not sure what the upper limit is on various systems.
