General Purpose Registers - stack

I am new to computer architecture. Can somebody help me understand how a limited number of registers is used when processing several complex applications? My question: there is only a fixed number of registers (for example, the 80386 has a total of sixteen registers that are of interest to the applications programmer).
What happens if we want more registers (for example, to accommodate an increased stack size)? Are the addresses and data in the registers written back to main memory? In a multitasking environment, is each application's register data moved between main memory and the registers as it is processed?
Do operating systems have special registers that do not interfere with the applications' general purpose registers?
Also, can you suggest any good resources on these concepts for a beginner?

Registers are the fastest memory in a computer. The instruction set of any particular CPU is designed around its register architecture. You are right that data and addresses must be written back (spilled) to memory when more register space is needed than the CPU provides.
As far as a multitasking system goes, the scheduler generally has to save the execution context when switching between tasks. This context includes the current state of the registers as well as other status bits (depending on the CPU).
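To make that concrete, here is a minimal sketch (mine, not from any particular OS) of what a saved per-task register context might look like on a 32-bit x86-like CPU. The actual copy into and out of the real registers has to be written in assembly; the C below only shows the bookkeeping around it.

```c
/*
 * Sketch only: a hypothetical scheduler's saved execution context for a
 * 32-bit x86-like CPU. Real register save/restore is done in assembly.
 */
#include <stdint.h>
#include <string.h>

struct cpu_context {
    uint32_t eax, ebx, ecx, edx;   /* general purpose registers        */
    uint32_t esi, edi, ebp, esp;   /* index, frame and stack pointers  */
    uint32_t eip;                  /* program counter                  */
    uint32_t eflags;               /* status bits                      */
};

struct task {
    int id;
    struct cpu_context ctx;        /* register state while not running */
};

/* On a context switch, save the live register values into the outgoing
 * task and reload the incoming task's saved values. */
void switch_context(struct task *prev, struct task *next,
                    const struct cpu_context *live_regs,
                    struct cpu_context *regs_to_load)
{
    memcpy(&prev->ctx, live_regs, sizeof *live_regs);
    memcpy(regs_to_load, &next->ctx, sizeof *regs_to_load);
}
```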
A good first step would be to learn assembly programming. It is so close to the hardware that you will learn all of this stuff thoroughly. Once you have that, pick up an operating systems book to see how it is done at a higher level. Depending on your commitment (and curiosity), you could also read some of the source code for smaller real-time operating systems, such as FreeRTOS. Reading up on 8-bit microcontroller architectures is also nice, since they are simple. For example, AVR or HC08 are pretty straightforward architectures to learn. All of the info is free; you just have to read it.
Enjoy.

Related

Is the "4GB patch" of any use in real life?

And if so, how? I'm talking about this 4GB Patch.
On the face of it, it seems like a pretty nifty idea: on Windows, each 32-bit application normally only has access to 2GB of address space, but if you have 64-bit Windows, you can enable a little flag to allow a 32-bit application to access the full 4GB. The page gives some examples of applications that might benefit from it.
HOWEVER, most applications seem to assume that memory allocation always succeeds. Some applications do check whether allocations succeed, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers. With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and those few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
So, have I made some incorrect assumptions? If not, how does this tool help in practice? I don't see how it could, yet I see quite a few people around the internet claiming it works (for some definition of works).
Your troublesome assumptions are these ones:
Some applications do check whether allocations succeed, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers.
There do exist applications that do better than "quit gracefully" on failure. Yes, functionality will be impacted (after all, there wasn't enough memory to continue with the requested operation), but many apps will at least be able to stay running - so, for example, you may not be able to add any more text to your enormous document, but you can at least save the document in its current state (or make it smaller, etc.)
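To illustrate the pattern (a contrived sketch of mine, not taken from any particular application): if growing a buffer fails, the program can refuse the operation but keep the existing data intact, so the user can still save it.

```c
/*
 * Contrived sketch of "fail the allocation but keep running": if growing
 * the document buffer fails, the edit is refused but the existing contents
 * stay valid, so the user can still save them. realloc() leaves the
 * original block untouched when it returns NULL.
 */
#include <stdio.h>
#include <stdlib.h>

int grow_document(char **doc, size_t *capacity, size_t new_capacity)
{
    char *bigger = realloc(*doc, new_capacity);
    if (bigger == NULL) {
        fprintf(stderr, "Not enough memory to enlarge the document; "
                        "current contents are preserved.\n");
        return -1;               /* caller keeps using the old buffer */
    }
    *doc = bigger;
    *capacity = new_capacity;
    return 0;
}
```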
With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and those few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
The trouble with this assumption is that, in general, an application's memory usage is determined by what you do with it. So, as over the past years storage sizes have grown, and memory sizes have grown, the sizes of files that people want to operate on have also grown - so an application that worked fine when 1GB files were unheard of may struggle now that (for example) high definition video can be taken by many consumer cameras.
Putting that another way: applications that used to fit comfortably within 2GB of memory no longer do, because people want to do more with them now.
I think the following extract from the 4 GB Patch page you linked pretty much explains how and why it works.
Why things are this way on x64 is easy to explain. On x86 applications have 2GB of virtual memory out of 4GB (the other 2GB are reserved for the system). On x64 these two other GB can now be accessed by 32bit applications. In order to achieve this, a flag has to be set in the file's internal format. This is, of course, very easy for insiders who do it every day with the CFF Explorer. This tool was written because not everybody is an insider, and most probably a lot of people don't even know that this can be achieved. Even I wouldn't have written this tool if someone didn't explicitly ask me to.
And to expand on CFF Explorer,
The CFF Explorer was designed to make PE editing as easy as possible, but without losing sight of the portable executable's internal structure. This application includes a series of tools which might help not only reverse engineers but also programmers. It offers a multi-file environment and a switchable interface.
And to quote a Microsoft insider, Larry Miller of Microsoft MCSA on a blog post about patching games using the tool,
Under 32 bit Windows an application has access to 2GB of VIRTUAL memory space. 64 bit Windows makes 4GB available to applications. Without the change mentioned an application will only be able to access 2GB.
This was not an arbitrary restriction. Most 32 bit applications simply can not cope with a larger than 2GB address space. The switch mentioned indicates to the system that it is able to cope. If this switch is manually set, most 32 bit applications will crash in a 64 bit environment.
In some cases the switch may be useful. But don't be surprised if it crashes.
And finally to add from MSDN - Migrating 32-bit Managed Code to 64-bit,
There is also information in the PE that tells the Windows loader if the assembly is targeted for a specific architecture. This additional information ensures that assemblies targeted for a particular architecture are not loaded in a different one. The C#, Visual Basic .NET, and C++ Whidbey compilers let you set the appropriate flags in the PE header. For example, C# and Visual Basic .NET have a /platform:{anycpu, x86, Itanium, x64} compiler option.
Note: While it is technically possible to modify the flags in the PE header of an assembly after it has been compiled, Microsoft does not recommend doing this.
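For the curious, here is a minimal sketch (mine, not from any of the quoted sources) of how you could check whether an executable already has the large-address-aware bit (IMAGE_FILE_LARGE_ADDRESS_AWARE, 0x0020) set, using the published PE/COFF header offsets. It assumes a little-endian host and does almost no error checking.

```c
/*
 * Sketch: report whether an EXE has the IMAGE_FILE_LARGE_ADDRESS_AWARE
 * bit (0x0020) set in its PE file header. Offsets follow the published
 * PE/COFF layout; assumes a little-endian host, minimal error checking.
 */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.exe\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    uint32_t e_lfanew = 0;        /* file offset of the "PE\0\0" signature */
    uint16_t characteristics = 0; /* IMAGE_FILE_HEADER.Characteristics     */

    fseek(f, 0x3C, SEEK_SET);                 /* e_lfanew lives at 0x3C    */
    fread(&e_lfanew, sizeof e_lfanew, 1, f);

    fseek(f, e_lfanew + 4 + 18, SEEK_SET);    /* signature + 18 bytes in   */
    fread(&characteristics, sizeof characteristics, 1, f);
    fclose(f);

    printf("LARGEADDRESSAWARE is %s\n",
           (characteristics & 0x0020) ? "set" : "not set");
    return 0;
}
```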
Finally to answer your question - how does this tool help in practice?
Since you have malloc in your tags, I believe you are working with unmanaged memory. Note that the patch does not change the size of anything: the process is still 32-bit and pointers are still 4 bytes. The risk is that, once allocations can land above the 2GB boundary, any code that treats pointers as signed integers, or that stashes flags in a pointer's high bit, will start misbehaving and producing what look like invalid pointers.
But for managed code, since all of this is handled by the CLR in .NET, the flag can genuinely help and should not cause many problems unless you are dealing with any of the following:
Invoking platform APIs via p/invoke
Invoking COM objects
Making use of unsafe code
Using marshaling as a mechanism for sharing information
Using serialization as a way of persisting state
To summarize: as a programmer I would not use the tool to convert my application; I would rather migrate it myself by changing the build target. That said, if I have an exe that can do well with more RAM, such as a game, then this is worth a try.

Is it possible/easy to determine how much power a program is using?

Is it possible to determine or even reasonably estimate how much power a program is using? The idea being to profile my code in terms of power consumption instead of just typical performance.
Is it enough to measure CPU use, GPU use and memory access?
There are many aspects that can influence the power consumption of an application, and these will vary a lot depending on the hardware used.
The easiest way to get an idea is to measure it. If your program does heavy calculations, it is quite simple to measure the difference: just read out the consumption while the app is running, and subtract the consumption while it is not.
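For example, if you happen to be on Linux with a CPU that exposes Intel's RAPL counters through the powercap interface, you can do a crude version of this differential measurement in software alone (a sketch, assuming /sys/class/powercap/intel-rapl:0/energy_uj exists and is readable, which often requires root):

```c
/*
 * Crude differential energy measurement using Intel RAPL on Linux.
 * Assumption: /sys/class/powercap/intel-rapl:0/energy_uj is present and
 * readable. The counter is in microjoules and wraps around; wrap-around
 * is ignored here for short intervals.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long long read_energy_uj(void)
{
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    if (!f) {
        perror("fopen energy_uj");
        exit(1);
    }
    long long uj = 0;
    fscanf(f, "%lld", &uj);
    fclose(f);
    return uj;
}

int main(void)
{
    long long before = read_energy_uj();
    sleep(10);                      /* run (or wait for) the workload here */
    long long after = read_energy_uj();

    double joules = (after - before) / 1e6;
    printf("package energy over 10 s: %.2f J (avg %.2f W)\n",
           joules, joules / 10.0);
    return 0;
}
```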
If your app is not of the heavy-calculation kind, then the challenge is a lot bigger, since a simple point-in-time comparison will not do the trick. You could get a measuring device that can log consumption over time, then compare that log with the logged process activity of your machine and try to filter out all the other scheduled tasks (checks for updates, etc.).
Just a tip if you want to go this way: APC's UPSes come with this functionality built in, and the PowerChute software stores a power consumption log in an Access database (C:\Program Files\APC\PowerChute Personal Edition\EnergyLog.mdb). I'm not sure if this is true for all models, but it was a nice extra feature that came with mine (Pro 550). I would lay the data alongside an Xperf trace (the free built-in profiler in Windows; look here for an overview) in order to correlate power variations with your application's activity and filter out scheduled jobs, etc.
That said, remember you will get different results on different hardware. An SSD will differ from a traditional hard disk, and the graphics adapter used can also make a difference, so you can only get a rough overall estimate by measuring on a "typical" system. Desktop systems will consume a lot more than laptops, etc. (also see this blog post).
Power consumption profiling tools are a lot more common for mobile devices. I'm not an expert in that field, but I know there are quite a few tools out there.
There is a project that lets you get the power consumption of a given program or PID on Linux, requiring no extra hardware: scaphandre.
With that you can create Grafana dashboards to follow the power consumption of a given project from one release to another. Examples can be seen here.

What is the significance of NCSU's double floating gate FET technology for the future of OS design?

This is my first post, and I certainly hope that it's appropriate to the forum - I've been lurking for some time, and I believe that it is, but my apologies if this is not the case.
I presume that most if not all readers here have encountered the story regarding the discovery (popularization?) of "double floating gate FETs" by NCSU, which is being hailed as a potential "universal memory."
http://www.engadget.com/2011/01/23/scientists-build-double-floating-gate-fet-believe-it-could-revo/
I've been slowly digesting a wide range of material related to software development, including OS design (I am, perhaps obviously, strictly an amateur). Given that one of the most basic functions any OS performs is managing the ever-present exchange of data between persistent storage and volatile memory, it strikes me that if this technology matures and becomes widely available, it would represent such a sea change in the role of the OS that it would almost necessitate rewriting our operating systems from the ground up to make full use of its potential. Am I correct in my estimation of the role of the OS, and in the potential ramifications of the new technology? Or have I perhaps failed to understand some critical distinction regarding the logical (as opposed to physical) relationships between processor, memory, and storage?
Until we know what it'll cost in mass production, it's impossible to say. Unless it can be made cheaply enough to replace hard drives, so you erase the distinction between "main memory" and "long term storage", the impact will be minimal. I'd be surprised to see that happen though.
Even if economics allowed that to happen, I doubt it really would. Some mobile devices already use identical memory (battery-backed RAM) for main memory and long-term storage. This technology would eliminate the battery backing for the memory, but doesn't seem likely to impact OS design at all. Since (at least most of them) use the same battery for normal operation and for maintaining long-term storage, this would simply mean an extra 10% (or whatever) of life from the same battery -- but only if its normal operation required about the same power as normal RAM (which strikes me as unlikely -- most flash draws extra power when writing).

Evaluate software minimum requirements

Is there a way to evaluate the minimum requirements of a piece of software? I mean, how can I discover, for example, the minimum amount of RAM that my application will need?
Thanks!
A profiler will not help you here. Neither will estimating the size of data structures.
A profiler can certainly tell you where your code is spending the most CPU time, but it will not tell you whether you are missing performance targets - e.g. whether your users will be happy or unhappy with the performance of your application on any given system.
Simply computing the size of data structures, and how many may be allocated at any one time will not at all give you an accurate picture of memory usage over time. The reason is that memory usage is determined by many other factors including how much I/O your application does, what OS services your application uses, and most importantly the temporal nature of how your application uses memory.
The most effective way to understand minimum requirements is to
Make sure you have an effective way of measuring performance, using metrics that are important to your users. The best metric is response time; depending on your app, a rate such as throughput or operations per second may also be applicable. Your measurements could be empirical (e.g. just try it), but that is the least effective approach. This is best done with some kind of instrumentation. On Windows, the choice is [ETW][1]. Other operating systems have other suitable mechanisms.
Have some kind of automated method of exercising your application. This will let you make repeated and reliable measurements.
Measure your application using various memory sizes and see where performance begins to suffer. This may also expose performance bugs that prevent your application from performing well. If you have access to platforms of various performance levels, use those as well. You didn't indicate what your app does, but testing on a netbook with 1GB of memory is great for many (not all) client applications.
You can do the same with the CPU and other components such as disk, networking or the GPU.
Also note that there is no simple answer here - doing an effective job of setting minimum requirements is real work. This is especially true if your application is particularly sensitive to one platform aspect or another.
There are other factors as well - for example, your app may run fine in one configuration until the user opens another application that may be memory hungry or a CPU pig. Users rarely only have one application open.
This means that in addition to specifying minimum requirements you must do an effective job in setting user expectations - that is explaining when your application will perform well, and when it won't, and what the factors are that impact performance.
[1]: http://msdn.microsoft.com/en-us/library/ms751538.aspx
Ideally, you'd decide on the minimum requirements of a piece of software based on your target audience, and then test your software during development on that configuration to ensure it delivers a satisfactory experience.
You can look at a system running your software, see how much memory is being consumed by your application, and use that to guide your memory requirements. CPU is a little more complex - you could try to model your CPU requirements, but doing this accurately can be challenging.
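If you want that memory number programmatically rather than from a system monitor, a cheap option on POSIX systems is the process's peak resident set size (a sketch; note that the units of ru_maxrss differ between Linux and macOS):

```c
/*
 * Sketch: report this process's peak resident set size with getrusage().
 * On Linux ru_maxrss is in kilobytes; on macOS it is in bytes, so check
 * the local man page before trusting the units.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Stand-in for the real workload: allocate and touch ~100 MB. */
    size_t n = 100u * 1024 * 1024;
    char *buf = malloc(n);
    if (!buf)
        return 1;
    for (size_t i = 0; i < n; i += 4096)
        buf[i] = 1;

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("peak RSS: %ld (kB on Linux, bytes on macOS)\n",
           (long)ru.ru_maxrss);

    free(buf);
    return 0;
}
```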
But ultimately, you need to test your app on the base system you are targeting.
Given the data structures used by the application, estimate how much space they will take up in normal use. Using that estimate, set up a number of machines (virtual or physical) to test it in different scenarios (e.g. different target operating systems, different virtual memory settings, etc.).
Then measure the performance of the application in the different scenarios. Your minimum settings will be the machine that performs the least adequately while still being acceptable.
You could try using a performance profiler on your software while stress testing it.
You could use virtualization to repeatedly run a representative test suite with different amounts of RAM in the virtual machine...when the performance falls below acceptable levels due to swapping, you've found the memory requirement.

Datamining models in FORTRAN or C (or managed code)?

We are planning to develop a datamining package for windows. The program core / calculation engine will be developed in F# with GUI stuff / DB bindings etc done in C# and F#.
However, we have not yet decided on the model implementations. Since we need high performance, we probably can't use managed code here (any objections here?). The question is, is it reasonable to develop the models in FORTRAN or should we stick to C (or maybe C++). We are looking into using OpenCL at some point for suitable models - it feels funny having to go from managed code -> FORTRAN -> C -> OpenCL invocation for these situations.
Any recommendations?
F# compiles to the CLR, which has a just-in-time compiler. It's a dialect of ML, which is strongly typed, allowing all of the nice optimisations that go with that type of architecture; this means you will probably get reasonable performance from F#. For comparison, you could also try porting your code to OCaml (IIRC this compiles to native code) and see if that makes a material difference.
If it really is too slow, then see how far scaling the hardware will get you. With the performance available from a modern PC or server it seems unlikely that you would need to go to anything exotic unless you are working with truly Brobdingnagian data sets. Users with smaller data sets may well be OK on an ordinary PC.
Workstations give you perhaps an order of magnitude more capacity than a standard desktop PC. A high-end workstation like an HP Z800 or XW9400 (similar kit is available from several other manufacturers) can take two 4- or 6-core CPUs, tens of gigabytes of RAM (up to 192GB in some cases), and has various options for high-speed I/O like SAS disks, external disk arrays or SSDs. This type of hardware is expensive but may be cheaper than a large body of programmer time. Your existing desktop support infrastructure should be able to support this sort of kit. The most likely problem is compatibility issues when running 32-bit software on a 64-bit OS; in that case you have various options like VMs or KVM switches to work around the compatibility issues.
The next step up is a 4 or 8 socket server. Fairly ordinary Wintel servers go up to 8 sockets (32-48 cores) and perhaps 512GB of RAM - without having to move off the Wintel platform. This gives you a fairly wide range of options within your platform of choice before you have to go to anything exotic.
Finally, if you can't make it run quickly in F#, validate the F# prototype and build a C implementation using the F# prototype as a control. If that's still not fast enough you've got problems.
If your application can be structured in a way that suits the platform then you could look at a more exotic platform. Depending on what will work with your application, you might be able to host it on a cluster, cloud provider or build the core engine on a GPU, Cell processor or FPGA. However, in doing this you're getting into (quite substantial) additional costs and exotic dependencies that might cause support issues. You will probably also have to bring a third-party consultant who knows how to program the platform.
After all that, the best advice is: suck it and see. If you're comfortable with F# you should be able to prototype your application fairly quickly. See how fast it runs and don't worry too much about performance until you have some clear indication that it really will be an issue. Remember, Knuth said that premature optimisation is the root of all evil about 97% of the time. Keep a weather eye out for issues and re-evaluate your strategy if you think performance really will cause trouble.
Edit: If you want to make a packaged application then you will probably be more performance-sensitive than otherwise. In this case performance will probably become an issue sooner than it would with a bespoke system. However, this doesn't affect the basic 'suck it and see' principle.
For example, at the risk of starting a game of buzzword bingo, if your application can be parallelized and made to work on a shared-nothing architecture you might see if one of the cloud server providers [ducks] could be induced to host it. An appropriate front-end could be built to run locally or through a browser. However, on this type of architecture the internet connection to the data source becomes a bottleneck. If you have large data sets then uploading these to the service provider becomes a problem. It may be quicker to process a large dataset locally than to upload it through an internet connection.
I would advise not to bother with optimizations yet. First try to get a working prototype, then find out where computation time is spent. You can probably move the biggest bottlenecks out into C or Fortran when and if needed -- then see how much difference it makes.
As they say, often 90% of the computation is spent in 10% of the code.
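To make that concrete, the kind of bottleneck you would move out of F# might be no more than a small numeric kernel like the one below (a sketch of mine; the function and library names are made up), compiled into a shared library and called from F#/C# via P/Invoke:

```c
/*
 * Sketch of a numeric kernel moved from F# to C (names are made up).
 * Compiled into a DLL / shared library, it can be called from F# or C#
 * via P/Invoke, e.g. on the F# side (for illustration only):
 *
 *   [<DllImport("nativekernel")>]
 *   extern double dot(double[] a, double[] b, int n)
 */
#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

EXPORT double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
```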

Resources