Every time I look at the compiler settings, the same question comes to mind: why does the current Delphi compiler still have the "Pentium-safe FDIV" compiler option?
The Pentium FDIV bug was found in November 1994 and no longer occurred in CPU models from 1995 onwards. Processors of that era were probably only powerful enough to run Windows 95, 98 and maybe also Me. As far as I know, the first 133 MHz Intel Pentium (and therefore the first fast enough to meet the minimum system requirements of Windows 2000) was released in June 1995, without the FDIV bug, of course.
The VCL/RTL of current Delphi versions makes use of Windows APIs that are not available on ancient operating systems. An empty Delphi XE6 VCL application does not run on Windows 98 or Me; I didn't check whether the Delphi XE6 VCL/RTL has already broken Windows 2000 compatibility, but I suspect it has.
So why does Embarcadero keep a compiler switch that was needed in 1994 when they have dropped support for operating systems that were in use in 2000? Nobody can need this compiler option, since the affected CPUs cannot run the operating systems the VCL/RTL requires anyway.
Update: to clarify the question: is there any use case where this switch might be useful? Or does the compiler internally ignore the option, keeping it only so that old project files still load?
The Pentium divide bug affected a number of the very early Pentium models. The highest clock speed of the affected models was 100MHz. The official documents indicate that Delphi XE6 targets Vista and up, but in reality it is still possible to target Windows XP, and I believe that XE6 can make executables that run on Windows 2000. The minimum requirements for XP are a 233MHz processor, and for Windows 2000 a 133MHz processor.
So, it is just about plausible that you might be able to run code compiled by XE6 on a defective Pentium processor. In reality, nobody at Embarcadero, for at least the past 15 years, will have felt any obligation to support defective Pentium processors. They simply have not been seen in the wild in the 21st Century.
So, why has the compiler feature not been removed? Only Embarcadero know the answer to that but I can give a few obvious reasons:
Removing the feature breaks backwards compatibility. If there are people who, even if they don't need to, build with the switch enabled, then removing it will affect them.
Removing the feature costs. Doing so involves changes to the UI, the compiler, the documentation, the test suite, etc.
Removing the feature introduces risk. Any time you change code, code that is tried and tested over many years, you run the risk of introducing a new defect.
If you're looking for an "official" answer, you're unlikely to get one. Unofficially, I can certainly point you to David's answer. His three points about the reasoning are clear. Unless there is a compelling reason to remove the feature, doing so doesn't make much business or technical sense. About the only reason it should be removed would be if the x86 back-end were completely rewritten from the ground up... in which case it's not so much removed as simply never taken into account in the first place.
I will note that the new AMD64/ia64 and LLVM-based ARM back-ends do not do anything with that directive, since it should be axiomatic that those CPUs aren't affected. The directive/option is recognized, but merely ignored.
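For reference, the IDE checkbox corresponds to a source-level directive, so it can also be toggled per unit. A minimal sketch of what that looks like (the unit fragment and function name are my own; the directive itself is the documented $SAFEDIVIDE / $U switch):

    {$SAFEDIVIDE ON}   // long form; {$U+} is the short form of the same switch
    function Divide(A, B: Double): Double;
    begin
      // With the switch on, the classic 32-bit compiler routes this FDIV
      // through an RTL helper that detects a flawed Pentium and works
      // around the bug; with it off, a plain FDIV instruction is emitted.
      Result := A / B;
    end;
    {$SAFEDIVIDE OFF}  // {$U-}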
Delphi still allows you to build console applications. And unless your console application relies on newer Windows APIs, you could easily make it compatible with Windows 95.
This is just hearsay, but apparently the Delphi compiler is written in assembly and is basically unmaintainable. They don't remove any features simply because it's incredibly hard to make any kind of change in the compiler, so they don't bother.
Related
I used to use Pythia to obfuscate my D6 program, but it seems Pythia no longer works with my D2007.
Here's the link to Pythia (no updates since early 2007): http://www.the-interweb.com/serendipity/index.php?/archives/86-Pythia-1.1.html
From the link above, here's what I want to achieve:
Over the course of time, a lot of new language features were added.
Since there is no formal grammar available, it is very hard for tool vendors (including Embarcadero themselves) to keep their Delphi language parsers up at the same level as the Delphi Compiler.
It is one of the reasons it takes tool vendors a while (and for Delphi generics support: a long time!) to update their tools, if they are updated at all.
You even see artifacts of this in Delphi itself:
the structure pane often gets things wrong
the Delphi modelling and refactoring tools sometimes fail
the Delphi code formatter goes haywire
Pythia is the only obfuscator for the native Delphi language I know of.
You could ask them on their site if they plan for a newer version.
Personally, I almost never use obfuscators for these reasons:
reverse engineering non-obfuscated projects is difficult enough (it would take competitors so long to reverse engineer that the chance of it reducing the backlog they already have is virtually zero)
their added value is limited when you have multi-project solutions (basically they only hide internal or private stuff)
they make bug hunting production code far too cumbersome
--jeroen
You may try UPX (the Ultimate Packer for eXecutables). It compresses the executable and its resources, so the text entries are not readable without decompressing the file first.
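For reference, a typical invocation looks roughly like this (MyApp.exe is a placeholder name):

    upx --best MyApp.exe      (compress with maximum compression)
    upx -d MyApp.exe          (decompress, restoring the original file)

Keep in mind that UPX is a compressor rather than a protector: standard unpackers (or upx -d itself) can restore the original executable, so it only deters casual inspection.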
I don't know of any good free solutions, but if you really need some protection you can always buy something like:
http://www.aspack.com/asprotect.html
or
http://www.oreans.com/themida.php
The Internet is full of developers requesting a 64-bit Delphi, and of users of Delphi software requesting 64-bit versions.
delphi 32bit: 1,470,000 pages
delphi 64bit: 2,540,000 pages :-)
That's why I've been wondering why Embarcadero still doesn't offer such a version.
If it were easy to do, I'm sure it would've been done a long time ago already. So what exactly are the technical difficulties that Embarcadero needs to overcome?
Is it the compiler, the RTL/VCL, or the IDE/Debugger?
Why is the switch from 32bit to 64bit more complicated than it was for Borland to switch from 16bit to 32bit?
Did the FPC team face similar problems when they added 64bit support?
Am I overlooking something important when I think that creating a 64-bit Delphi should be easier than Kylix or Delphi.NET?
From what I have read in forums, I think the main delay was that the 32-bit compiler could not be adapted to 64-bit easily at all, so they had to write a new compiler with a structure that allows it to be ported to new platforms easily. That delay is pretty easy to understand.
And the first thing the new compiler had to do was support the current 32-bit Windows before targeting 64-bit, so that extra delay is also easy to understand.
Now, on the road to 64-bit support, Embarcadero decided to target 32-bit Mac OS X first, a decision some people don't understand at all. Personally I think it's a good marketing decision from Embarcadero's business point of view (wait, I'm not saying 64-bit support is less important; read carefully, I'm not talking about importance but about marketability). It is nevertheless a very controversial extra delay on the road to 64-bit (Embarcadero says they have teams working in parallel, but in practice there is a delay, at least for versioning reasons - marketing again).
Also, it's good to keep in mind that FPC is not a commercial product, so they can add and remove features more easily than Delphi can.
If it weren't for the limitation on shell extensions (I have one that loads into Windows Explorer), I'd probably never care about 64-bit. But due to this limitation, I need it, and I need it now. So I'll probably have to develop that part in Free Pascal. It's a pity, as aside from this, there are few apps that would actually benefit from 64-bit. IMO, most users are either drinking the Kool-Aid, or are angry about having been duped into buying something that sounded great but turned into a headache. I know a guy who is happy to run Win7/64 so he has enough RAM to run a full copy of XP in a VM, which he wouldn't need if he'd gotten Win7/32 like I told him to. :-<
I think everyone has been duped by the HW manufacturers, particularly the RAM dealers who would otherwise have a very soft market.
Anyway, back to the question at hand... I'm caught between a rock and a hard place. My customers are placing demands on me, due to an architecture decision from M$ (not allowing 32-bit DLLs in Windows Explorer) and perception issues (64-bit must be twice as good as 32, or maybe 32 has to run on the "penalty core" or something). So I'm being driven by a largely "artificial" motivation. And therefore, I must project that onto Embarcadero. But in the end, the need for 64-bit support in Delphi is IMO, mostly based on BS. But they're going to have to respond to it, as will I.
I guess the closest I've seen to an "answer" to your question from Embarcadero's point of view is summarised in this article on the future of the Delphi compiler by Nick Hodges.
The real issues are not technical. Borland/CodeGear first, and then Embarcadero, have shown that they do not like to maintain more than one Windows version of Delphi. They delayed the Unicode switch until they could drop Ansi OS support entirely. In reality they would need to support both a Win32 compiler/library and a 64-bit compiler/library, because a mix of 32-bit and 64-bit Windows versions is in use. I believe they are trying to delay it as much as possible to avoid maintaining the 32-bit ones for as long as they can.
The Delphi compiler became pretty old and difficult to maintain, but they decided to rewrite it aiming at non-Windows OSes, and I am sure the driver was to port some Embarcadero database tools to non-Windows 32-bit platforms, ignoring Delphi customers' actual needs and delaying the 64-bit compiler and library yet again, in a cross-platform attempt made, once more, by cutting corners to deliver quickly, and thereby doomed to fail again.
Stubbornly, they do not want to switch to a two-year release cycle because they want fresh cash each year, even though it becomes increasingly difficult to deliver truly well-done improvements in such a short cycle; it took almost four years to fix the issues introduced with the Delphi 2005 changes. In turn they have to put developers to work on "minor" improvements to justify upgrades, instead of working fully on the 64-bit effort.
I have heard that they wanted a complete rewrite of the compiler without losing backward compatibility. Consider that there is no full syntax description of the language (I have asked, but they don't have one, so I made my own available to the general public). I can imagine that the documentation is not as complete as they would want it to be, so they are probably trying to reverse engineer their own code.
I'm a strong supporter of Delphi, and I think 2009 and 2010 are great releases (comparable with the rock-solid Delphi 7). But the lack of 64-bit support is going to kill them in the end.
Back to the question, the biggest problem should be the compiler. The switch from 16-bit to 32-bit was easier because there was less legacy (Delphi 2 was 32-bit, and the Object Pascal language was a lot simpler than it is now).
I'm not sure about Free Pascal. Maybe their code was easy to port. Maybe they lost some backward compatibility. To be sure, you can ask them.
I know you are asking for the technical issues, but I guess the marketing department might be the biggest issue... I am sure they get far more excited by the prospect of new markets that bring new customers, and thus manage to shift priorities. A problem (in my opinion) is the bad track record: we have seen Kylix and Delphi.NET in the past, which were both, ehm, kylixed. I can imagine that new customers will wait and see if it's around to stay, and that in turn might lead Embarcadero to abandon it prematurely.
That said: there are certainly some issues and design considerations for x64, and I just hope that the Embarcadero team will share its thoughts about them and discuss them with the community (to prevent rants like the ones we had about the Unicode change).
There already is a 64-bit Delphi (Object Pascal). It's called Free Pascal. So while I have no doubt it's hard, it's not "so hard" that it's not feasible. Of course, I can't speculate about Embarcadero.
Allen Bauer from Embarcadero also said recently that they had to implement exception support completely differently for 64-bit "due to differences in the exception ABI on Win64".
Right now I plan to test on 32-bit, 64-bit, Windows XP Home, Windows XP Pro, Windows Vista Home Basic, Windows Vista Ultimate, Windows 7 Home Basic, and Windows 7 Ultimate ... all with the latest service pack.
However, now I'm wondering if it's worthwhile to test on both AMD and Intel for all the listed scenarios above or would it be a waste of time?
Note: this is a security application for everyday average users.
My feeling is that this would only be worthwhile if you had lots of on-the-edge hand-coded assembly language or some kind of incredibly tight timings (which you're not going to meet with that selection of OS anyway).
If you're using off-the-shelf commercial compilers, then you can be reasonably sure they're going to generate code which runs on all the normal processors.
Of course, nobody could ever prove they didn't need to test on a particular platform, but I would think there are bigger causes of platform difference to worry about than CPU brand (all the various multi-core/hyperthreading permutations, for example, which might expose all your multithreaded code bugs in different ways)
Only if you're programming in assembly and use extended, vendor-specific instruction sets. But since AMD and Intel have cross-licensing agreements in place, this is more of a historic issue than a current one.
In every other case (e.g. using a high level language) it's the job of the compiler writers to ensure the code is x86 compliant and runs on every CPU.
Oh, and apart from the FDIV bug, processor vendors usually don't make mistakes.
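To make that concrete: code that does use extended instructions normally guards them with a CPUID feature check rather than assuming a particular vendor or model. A minimal 32-bit Delphi sketch (the function name is my own; it assumes a Pentium-class CPU, which always supports CPUID, and a compiler whose built-in assembler knows the instruction):

    function HasSSE2: Boolean;
    var
      Features: Cardinal;
    begin
      // CPUID function 1 returns the standard feature flags in EDX.
      asm
        push ebx          // CPUID clobbers EBX, which Delphi expects preserved
        mov  eax, 1
        cpuid
        mov  Features, edx
        pop  ebx
      end;
      Result := (Features and (1 shl 26)) <> 0;   // bit 26 = SSE2
    end;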
I think you're looking in the wrong direction for testing scenarios.
Yes, it's possible that your code will work on Intel but not on AMD, or in Windows Vista Home but not in Windows Vista Professional. But unless you're doing something very closely tied to low-level programming in the first case, or to details of OS implementation in the second, the odds are small. You could say that it never hurts to test every conceivable scenario, but in real life there must be some limit on the resources available to you for testing. Testing on different processors or different OSes is, in most cases, not testing YOUR program; it's testing the compiler, the OS, or the processor. How much time do you have to spare to test other people's work? I think your time would be better spent testing more scenarios within your own code. You don't give much detail on just what your app does, but to take one of my own examples, it would be much more productive to spend a day testing the sale of products our own company makes versus products we resell from other manufacturers, or testing sales tax rules for different states, or whatever.
In practice, I rarely even test deploying on Windows versus deploying on Linux, never mind different versions of Windows, and I rarely get burned on that.
If I was writing low-level device drivers or some such, that would be a different story. But normal apps? Don't waste your time.
Certainly sounds like it would be a waste of time to me - which language(s) are your programs written in?
I'd say no. Unless you are writing your application in assembler, you should be far enough removed from the processor not to need to worry about differences. The processors support the Windows OS whose APIs are what you are interfacing with (depending on the language). If you are using .NET, the only foreseeable issue you will have is if you are using a version of the framework that those platforms don't support. Given that they are all XP or later, you should be fine. If you want to worry about something, make sure your application plays nicely with the Vista-and-later security model.
The question is probably "what are you testing". It is unlikely that any of the tests exercise something that would differ between AMD and Intel hardware platforms. Differences could be expected at the driver level, but you do not seem to plan on testing your software on every bit of PC hardware out there. Most probably there would be far more differences between different Windows service pack levels than between AMD and Intel processors.
I suppose it's possible there is some functionality in your code that (whether you know it or not) takes advantage of some processing/optimization in one or the other that could have a serious effect on the outcome. Keyword possible.
I would say in general you're unlikely to have to worry about it. If you're going to do it on multiple machines anyway, mix it up on them. But I wouldn't stress out about it.
I would never run all of my regression tests on both AMD and Intel unless I had specifically fixed an issue unique to either one. That is what regression testing is for.
Unit testing on the other hand... I wouldn't anticipate any difference. So again, I wouldn't bother running unit tests on both until I had actually seen an issue specific to either AMD or Intel.
If you rely on accurate / consistent floating point results, then yes, definitely.
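If you do depend on consistent results, one common mitigation (independent of the CPU vendor) is to pin the FPU precision and rounding mode at startup instead of relying on whatever state the process happens to inherit. A minimal sketch using the Math unit helpers available in recent Delphi versions (treat the exact settings as an example, not a recommendation for your application):

    program PinFpu;

    uses
      Math;

    begin
      SetPrecisionMode(pmDouble);   // 64-bit precision for x87 intermediates
      SetRoundMode(rmNearest);      // IEEE round-to-nearest
      // ... run the numeric code and the tests under the same settings ...
    end.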
We have a medium-to-large size application. One version runs on Delphi 6 and another one on Delphi 2006.
One argument would be support for Unicode. We need that to cater to Customers around the world.
Other things I have read about are: better IDE (stability, speed), better Help and some cool additions to the language (e.g.: generics)
What about third-party components? We use DevExpress, DBISAM and many others. Are these already ported?
Touch/Gestures sound cool, but we have no use for that in our application.
Better theme support (e.g., TStringGrid/TDBGrid now support themes).
Support for Windows Vista and Windows 7, including support for the Direct2D Canvas in Win7 and the Touch/Gesture support you mentioned.
Improved refactoring, including support for refactoring generics.
Built-in source code formatter.
IDE Insight allows you to find things in the IDE itself.
Enhanced RTTI (see the sketch after this list).
Improvements in the debugger, including new custom data visualizers and the ability to create your own. There are two included with source (one for TDateTime and one for TStringList). Also better support for debugging threads, including the ability to name threads for debugging and set breakpoints on specific threads.
The ability to add version control support to the IDE via interfaces. This will allow version control developers to add support directly in the IDE itself.
The help is much better than in previous versions. It's been completely redesigned again, and is much more comprehensive and complete. There's also an online wiki-based version (used to generate the help itself) that you can add or edit.
Background compilation allows you to continue working while you're compiling your project.
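To give a flavour of the enhanced RTTI mentioned above, here is a minimal sketch using the Rtti unit that ships with Delphi 2010 (the TCustomer class is made up for illustration):

    program RttiDemo;

    {$APPTYPE CONSOLE}

    uses
      SysUtils, Rtti;

    type
      TCustomer = class
      private
        FName: string;
      public
        property Name: string read FName write FName;
      end;

    var
      Ctx: TRttiContext;
      Prop: TRttiProperty;
      Customer: TCustomer;
    begin
      Customer := TCustomer.Create;
      try
        Customer.Name := 'Alice';
        Ctx := TRttiContext.Create;
        // Enumerate public (not just published) properties and read their
        // values - something the old TypInfo-based RTTI could not do.
        for Prop in Ctx.GetType(TCustomer).GetProperties do
          Writeln(Prop.Name, ' = ', Prop.GetValue(Customer).ToString);
      finally
        Customer.Free;
      end;
    end.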
As far as third party controls, that's up to the specific vendor; you'll have to check to see if Delphi 2010 versions are available for each of them individually. (You might check the Embarcadero web site, though, to see if they have a list already available; I seem to recall hearing of one... Ah, yes. Here it is.)
Last upgrade for old version
With old versions of Delphi (before Delphi 2005), you only have until January 1, 2010 to upgrade.
After that you will have to buy a full version.
Productivity
http://www.tmssoftware.com/site/blog.asp?post=127
Purely as a reactive measure. Let's say that there is a new feature in the latest version of a yet-to-be-released operating system. Let's say that this feature breaks certain features inside your application. If there were to be a global fix for it, it would most likely not be placed in older versions of the compiler, but in the newer versions which "officially" support the new operating system. The biggest problem with waiting too long is that when such a measure is needed, it's generally at the zero hour, when sales are at risk.
Upgrade NOW, and help prepare your application to be more reactive to future changes.
Don't convince him to buy a Delphi 2009/2010 upgrade; convince him to buy Software Assurance.
The refactoring tools and overall speed and stability of the IDE will make the development team more productive.
Working with the latest tools will make it easier to recruit top talent.
The IDE is definitely a step up from Delphi 6 and/or Delphi 2006.
If Unicode is important to your customers then Delphi 2009/2010 is a clear option. But if Unicode is important to you, rather than your customers, then I'd be careful.
Unicode is not "free". If your users/customers have concerns w.r.t memory footprint and/or performance, and/or your application involves extensive string handling, then Unicode exacts a price that all your customers will have to pay, and for customers who are not themselves concerned with Unicode support, that price comes with zero benefit (to them).
Similarly if your application sits on top of a currently non-Unicode enabled database schema. Migrating existing databases from non-Unicode to Unicode is non-trivial, and if you have customers with large production databases, incurring downtime for those customers whilst they migrate their data stores is something you should consider carefully.
Also you will need to be very aware of any interfaces to external systems - your code will unilaterally "go Unicode", and that may adversely impact on external interfaces to other systems that are not.
In such cases you would do well to tie the transition to Unicode to other feature improvements and benefits, so that the move is compelling for other reasons.
Also, if you genuinely have customers with a real need for true Unicode, then the transition is not as simple as recompiling with the latest/greatest compiler and VCL. True Unicode support will involve a great deal more work in your application code than you might at first appreciate.
Of course, having a Unicode-capable compiler/VCL is a crucial component, but it's not an answer on its own.
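To make the "more work than a recompile" point concrete: in Delphi 2009 and later, string is UnicodeString and SizeOf(Char) is 2, so any code that equates character count with byte count needs auditing. A small made-up example of the kind of line that bites:

    procedure CopyStringBytes;
    var
      S: string;               // UnicodeString in Delphi 2009 and later
      Buf: array of Byte;
    begin
      S := 'Hello';

      // Pre-2009 habit: assumes one byte per Char. Once Char became
      // WideChar (2 bytes), this copies only half of the string data.
      SetLength(Buf, Length(S));
      Move(S[1], Buf[0], Length(S));

      // Unicode-aware version: size and copy in bytes, not characters.
      SetLength(Buf, Length(S) * SizeOf(Char));
      Move(S[1], Buf[0], Length(S) * SizeOf(Char));
    end;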
The Unicode change has a significant impact on 3rd party components. Even if you have the source to your 3rd party code you may find yourself facing Unicode issues in that code unless the vendor has taken steps to update that code in a more current version. Most current vendor libraries are Unicode by now though I think, so unless you are using a library that is no longer supported by the vendor, you should be OK on that score.
I would also exercise caution when it comes to those "cool" language features such as generics. They sure do look cool, but they have some seriously limiting characteristics that you will run into outside of feature demonstrations. They can also lead to maintenance and debugging difficulties: the community's experience with them is limited, so "best practice" has yet to emerge, and the tool support perhaps hasn't yet caught up with the uses to which those features are being put in actual code.
Having said ALL that.... Since you cannot realistically choose any version other than Delphi 2010 to upgrade to, then if you are going to upgrade at all then you have to bite the Unicode bullet and will find yourself presented with lots of tempting language features to tinker with and distract you. ;)
And now that Embarcadero are imposing a more draconian policy w.r.t. qualifying upgrade products, you will have to get off Delphi 2006 if you wish to qualify for upgrade pricing for Delphi 2011 onward, whether or not you decide that 2010 is right for you. Otherwise, when the time comes to upgrade to Delphi 2011, you will find yourself treated as a new customer, and if you thought that upgrade pricing was steep, check out the new-user license costs!
D2006 was an awful version of Delphi. It's worth upgrading just to be rid of all the memory leaks and random IDE crashes and glitches. Justify it to the boss as a massive decrease in lost productivity. That means less money wasted paying you to not produce code because your dev tools aren't working. It'll pay for itself very quickly on that basis alone.
As for D6 vs. D2010, that's a feature argument. Start with Skamradt's response, that it helps your code be future-proof. Underscore it with OS compatibility. D2007 was the first version that understands Vista. D2010 is the first version to understand Windows 7. If you're compiling with any older version, your app is obsolete before you even deploy it because there's no guarantee it's compatible with modern versions of Windows.
Then you've got actual language features. The main improvements IMO from 2006 to 2010 are Generics, which helps out with all sorts of repetitive tasks, and extended RTTI. Robert Love has been doing some great blog posts lately on how the extended RTTI can simplify common real-world problems. (Plus Unicode, of course.)
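As a small illustration of the generics point, compare a pre-generics container with its Delphi 2009/2010 counterpart (TObjectList and TObjectList&lt;T&gt; come from the standard Contnrs and Generics.Collections units; the TOrder class and function names are made up):

    uses
      Contnrs, Generics.Collections;

    type
      TOrder = class
        Total: Currency;
      end;

    // Delphi 2006 style: untyped TObjectList, casts everywhere.
    function SumTotalsOld(Orders: Contnrs.TObjectList): Currency;
    var
      I: Integer;
    begin
      Result := 0;
      for I := 0 to Orders.Count - 1 do
        Result := Result + TOrder(Orders[I]).Total;   // unchecked cast
    end;

    // Delphi 2009/2010 style: the list knows its element type.
    function SumTotalsNew(Orders: TObjectList<TOrder>): Currency;
    var
      Order: TOrder;
    begin
      Result := 0;
      for Order in Orders do
        Result := Result + Order.Total;               // no cast needed
    end;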
Playing the devil's advocate, there may be reasons not to upgrade. For instance, you might be missing the source to certain components, or you may still need to support Win9x.
I think you'll probably find the best reason to upgrade (leaving all the new wizz-bang features aside) is that you'll be significantly more productive in the new IDE. If you don't / can't upgrade I'd recommend grabbing a copy of Castalia, which can give you access to many productivity enhancements (e.g. refactoring) in Delphi 6.
DBISAM is updated, I just emailed them this past week concerning a project I hope to be upgrading from Delphi 3 to Delphi 2010.
All the other packages I looked into upgrading for that project (WPTools, Infopower, TMS) all state on their websites that they offer compatibility with 2010.
I never had D2006 (I have 2007) so I can't speak to any defects in that particular release (D2007 isn't that great, either), but it's generally a good principle to keep your tools in good shape. For a saw that means sharp; for software that means current. Especially in a new-OS year, you probably want the corresponding version of your primary development environment.
It seems to me there are 2 aspects in developing professional applications:
You want to earn money: you have to stick to your customers' demands, keep your code KISS and maintainable, and so on. You have to be productive: never mind generics, RTTI, widgets like the flow panel, gestures and so on, because they take time to learn and more time to use. From this angle, the change from D7 to D2010 is not necessarily relevant; a change to another IDE like REAL Basic, which allows multi-platform targets, might be more to the point.
BUT as a developer there is a child and a poet in you, fascinated by new technologies and/or algorithms... This is the creative part of the job. You have to be creative if you want to be impressive and innovative. Upgrading to Delphi 2010 is a must-have; exploring new classes and new objects is a way of life in today's programming.
That's my humble point of view, and the reason that keeps me spending my money to upgrade Delphi, from version 1 all the way to 2010.
Best regards,
Didier
A list of compatible components that already support Delphi 2010, including DevExpress (the article will be updated periodically from our technology partner database), is at
http://edn.embarcadero.com/article/39864
Argument: tens of thousands of tools and components are available for the things you might need, in addition to the open APIs for components and the IDE.
Item 9 of The Joel Test: 12 Steps to Better Code is:
Do you use the best tools money can buy?
Perhaps this argument is germane here.
On the other hand if you are maintaining legacy code and not generating anything that has dependency on new OS or tool features, it is a hard argument to win. I would not however recommend generating entirely new projects on tools that old.
Unicode has been supported on Windows since at least NT 4.0, and for Windows 95/98/Me since the addition of MSLU in 2001 - so surely Delphi 2006 supports it!? [edit]Not fully supported in the component library it seems.[/edit]
I suggest that the one compelling argument is ensuring Vista and Windows 7 compatibility. I understand that 64-bit target support was planned for Delphi this year. That may be another argument; but again it only applies if you actually intend to target such a platform, and in a way that will give a tangible benefit over 32-bit code. [edit]I emphasised planned because I did not know whether it had made it into the product, but that it might be a consideration for you. It seems it has not, so the argument you put to management might be even less strong.[/edit]
Management are not going to be impressed by the "I just want cool tools to play with", you have to approach it on a "Return on Investment" (ROI) basis. Will you get your product out faster or cheaper using this tool? Are the existing tools a technical barrier to progress? Conversely, consider whether spending time porting your legacy code to new tools (with the associated validation and testing) will kill your budgets and deadlines for no commercial advantage?
The same way DOS morphed into Windows?
We seem to have ended up supporting and developing for three platforms from Microsoft, and I'm not sure where the boundaries are supposed to lie.
Why can't the benefits of the CLR (such as type safety, memory protection, etc.) be built into Windows itself?
Or into the browser? Why an entirely separate virtual machine? (How many levels of virtual machine indirection are we dealing with now? We just added Silverlight - and before that Flash - running inside the browser running inside maybe a VM install...)
I can see raw Windows for servers, but why couldn't there be a CLR for workstations talking directly to the hardware (or at least not the whole Windows legacy ball and chain)?
(Oops - I've got two questions here. Let's make this: why can't .NET be built into Windows? I understand about backward compatibility - but the safety of what's in .NET could be at least optionally in Windows itself, couldn't it? It would just be yet another of many sets of APIs?)
Factoid - I recall that one of the competitor architectures selling against MS-DOS on the IBM PC was the UCSD Pascal runtime - a VM.
And let's not forget that DOS didn't morph into Windows, at least not the Windows we know and love today. DOS was the operating system, Windows 3.1 a GUI shell resting atop said operating system.
When Windows 95 came out, it is true that there was no more boxed product labeled "Microsoft DOS," but Windows 95, architecturally, was DOS 7.0 with a GUI shell resting atop.
This continued through Win98 and WinME (aka Win9X).
The Windows we know today (XP, Vista, 2003, 2008) has its core from the Windows NT project, a totally separate beast. (Although NT was designed to be compatible with 3.1, and later, 9x binaries, and used a near-identical but expanded API.)
DOS no more morphed into the Windows we are familiar with than the original Linux core morphed into KDE.
The two APIs will need to continue to coexist as long as there are products built natively against Windows which are still in a support cycle. Considering that the Windows API still exists in Windows Server 2008 and Windows 7, that means at least 2017. Truthfully, it will probably be longer, because while managed code is a wonderful thing, it is not always the most appropriate/best answer.
Plus ... As a programmer, you ought to know better than anyone: It's never as easy to do something as it might appear from the outside!
Windows is multi-million lines of code, most of it in C. This represents an enormous decades-long investment. It is constantly being maintained (fixed) for today's users. It would be completely impossible to stop the world while they rewrite every line in C# for ten years, then debug and optimise for another ten, without totally wrecking their business.
Some of the existing code could in theory be compiled to run on the CLR, but it would gain no benefit from doing so. Compiling a large subset of C to the CLR is possible (using the C++/CLI compiler) but it does not automatically enable garbage collection, for example. You have to redesign from the ground up to get that.
Well, for one the CLR isn't an operating system. That's a pretty big reason why not ... I mean even the research OS, Singularity, is not just the CLR. I think you should read up on some books about the Windows kernel and general operating system stuff.
Microsoft are still a few Windows releases away from that.
But they would start with something like Singularity I think.
Because it would break backwards compatibility? And mainstream chip architectures don't line up with VM architecture? Hardware for a Java VM was made a while ago, but nobody cared.
The biggest issue I see is that the CLR runs on a VM, and the VM is useful as a layer of abstraction. Some .NET apps can be run on Linux (see the Mono project, I think they are up to .NET 2 compatibility now), so that would all be gone. In C/C++ or languages that directly talk to the hardware, you have to recompile your code into different binaries for every OS and hardware architecture. The point of having the VM there is to abstract that, so that you can write the code, build it, and use the exact same binary anywhere. If you look at it from a Java perspective, they have done a much better job of using their VM as a "write once run anywhere" model. The same java classes will run on Windows, Mac, and Linux without rebuild (by the programmer anyway, technically the VM is doing that work).
I think the #1 point here is that .NET/CLR is NOT Windows specific, and IMO Microsoft would only help the .NET suite of languages if it put a little more effort toward cross-OS compatibility.
Because Microsoft has a huge legacy it cannot simply drop. Companies have invested lots of money in Windows and Win32 software that they cannot dismiss.
The CLR or some VM may be used (VMs are being used) to run an OS on top of it. But then the question is: what should one use to build the VM? Probably C/C++ or some other similar language, and most probably assembly in some cases to speed things up.
That would mean the VM will still have the problems that Windows (or any OS) faces now. As pointed out by others, some parts of the OS and related applications may be ported (or, as you said, morphed) to sit on top of the VM, but getting the entire OS on top of a VM doesn't serve much purpose. The reason is that the VM would then be the real OS, implementing garbage collection and other protective measures for the morphed OS.
Those are my two cents. :)
What language would the CLR itself use? What APIs would it call? Say it needed to open a file or allocate memory or create a process, you think the CLR is going to do that? The CLR is built on top of native code. A managed OS would create overhead.
CLR is for app development, it is there to make it easy to make apps, and easy to make less buggy software. It uses a garbage collector, and they can destroy performance. They can be great too, but you usually end up with some kind of performance problems during development, caused by garbage collection.
They must make it backward compatible so they must make it have some kind of native API.
If you're saying let's make a pure 100% managed OS and forget backward compatibility, or have some giant compatibility layer, all you're really saying is let's force a garbage collector onto everything, right? Besides a garbage collector and the portability guarantees you get by being CLI compliant, what are you getting? The algorithms and everything else are still compiled into native code by the time they execute, so the only really significant difference is the memory management.
I actually have seen trends suggesting that the CLR will be planted deeper into the software stack. I remember seeing, in the newest Windows software stack, some CLR-related libraries placed at lower levels.
But the CLR won't morph into Windows; we know backward compatibility is very important for the software ecosystem.