Writing a low-level program like PartitionManager - Delphi

I would like to learn how to write programs which may run without booting the operating system, like Norton Ghost or Paragon programs. I would like to be able to run the program from a CD or a USB stick.
Could you give me some pointers, please?

Basically - unless you use an existing one - you have to write your own operating system - it could be small, but it is an OS.
Writing it is a bit different from writing applications, because you have to interface with the hardware directly (or through the BIOS). It requires a good knowledge of low-level programming, hardware device specifications and processor architecture, especially if you need more than the memory addressable in real mode and have to switch an x86 processor to protected mode ("unreal mode" could be used, though), which uses a fairly complex mechanism. Some parts may need to be written in assembler to access the special "privileged" instructions used by kernels running at the most privileged level ("ring 0") in protected mode, and to handle interrupts.
You could start here http://wiki.osdev.org/Main_Page.
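To give a flavour of what that protected-mode work involves, here is a minimal sketch in C of one entry of the Global Descriptor Table that the CPU needs before the switch. The layout is the standard x86 one, but the field names and the example values are only illustrative, and a real boot loader would still need a few lines of assembler (lgdt plus a far jump) that are not shown here.

    #include <stdint.h>

    /* One 8-byte GDT descriptor, as loaded with lgdt before entering
       protected mode. Field names here are illustrative. */
    struct gdt_entry {
        uint16_t limit_low;    /* bits 0-15 of the segment limit            */
        uint16_t base_low;     /* bits 0-15 of the segment base address     */
        uint8_t  base_mid;     /* bits 16-23 of the base                    */
        uint8_t  access;       /* present bit, privilege level (ring), type */
        uint8_t  flags_limit;  /* bits 16-19 of the limit + granularity and 32-bit flags */
        uint8_t  base_high;    /* bits 24-31 of the base                    */
    } __attribute__((packed));

    /* A flat 4 GiB ring-0 code segment would typically be encoded as:
       base = 0, limit = 0xFFFFF, access = 0x9A, flags_limit = 0xCF. */
    static const struct gdt_entry kernel_code = {
        .limit_low = 0xFFFF, .base_low = 0, .base_mid = 0,
        .access = 0x9A, .flags_limit = 0xCF, .base_high = 0
    };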

Delphine is an attempt to write a primitive OS using Free Pascal. It is not an active project anymore, but the code is there for you to try.
ClassiOS is an OS written in Delphi.
A more professional solution is to go for a Win32-compatible OS like On Time RTOS-32, buy a license and make a bootable stick/CD program in Delphi.
Note this is an expensive solution, but it is used in lots of real-time critical systems. We implemented a more or less DOS-like clone used to boot any x86 system from a USB stick.

Related

Why do we need AML - ACPI Machine Language?

As I understand it, ACPI defines a generic hardware programming model where the operating system relies on AML (ACPI Machine Language) code provided by the OEM firmware to manipulate the hardware.
In order to execute the AML code, the operating system has to incorporate an AML interpreter.
So, it looks to me like firmware developers use AML to provide a control interface between the platform hardware and the operating system.
But do we really need AML?
I think that ultimately the hardware can only be configured through the native instructions of the platform. So the AML interpreter must translate the AML into native instructions, otherwise it cannot be executed on the platform.
But what's the point of using an intermediate language like AML? I understand that AML is said to be platform-independent, which means I can use it to describe my platform in a non-native way.
But in practice the AML is part of the platform firmware, and the rest of the firmware has already been built into the target platform's native instructions. So what good does it do to make such a small part of the firmware platform-independent? Why not just use native instructions? There must be some way to let the OS use them as well, and then the operating system wouldn't need the AML interpreter at all. A lot of complexity could be avoided.
One of the big goals of ACPI over its predecessor, APM, was to give the OS more visibility into and control over power state transitions.
APM was a black box. The OS knew nothing about the power management implementation. It would just call a BIOS function and the BIOS handled all of the magic. Did it work? Did the system sleep properly? Did the system freeze? Was a user application able to handle the BIOS implementation? The sad truth was that many systems had power management that was downright broken, and Microsoft wanted to provide a better power management experience for the growing laptop industry.
Now, the BIOS hands the ASL/AML code over to the OS and the OS executes it, not the BIOS. If the BIOS code does something dumb (like messing with registers it shouldn't), Windows can detect that by parsing the code and block it. AML is 100% decompilable, unlike compiled C code.
Remember that ACPI is not x86 specific. At the time it was developed, Itanium and Xscale were around. Intel and Microsoft needed a language that would work on all platforms, both 32 and 64 bit.
Lastly, ASL is more than just a list of executable functions. It also includes a number of static configuration tables. The ASL code has tables that define the non-PnP hardware built onto your motherboard. It has tables of supported power states. A traditional programming language like C isn't really set up for that.
If ACPI was invented today, they would probably use something like XML to provide the info to the OS.
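To illustrate the "static tables" point above, here is a hedged C sketch of the common header that every ACPI system description table (DSDT, SSDT, FADT and so on) begins with; the OS walks these headers to locate the tables, and the AML payload of the DSDT/SSDTs is what gets handed to the interpreter. The struct mirrors the layout described in the ACPI specification, but it is only a sketch, not production code.

    #include <stdint.h>

    /* Common header at the start of every ACPI system description table
       (per the ACPI spec); 36 bytes in total. */
    struct acpi_sdt_header {
        char     signature[4];     /* e.g. "DSDT", "SSDT", "FACP" */
        uint32_t length;           /* length of the whole table, header included */
        uint8_t  revision;
        uint8_t  checksum;         /* all bytes of the table must sum to 0 */
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        uint32_t creator_id;
        uint32_t creator_revision;
    } __attribute__((packed));

    /* For a DSDT/SSDT, the AML byte code follows immediately after this
       header and occupies (length - sizeof(struct acpi_sdt_header)) bytes. */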
Originally, hardware for "80x86 PC" was cloned from IBM's PC, and this created an effective de-facto standard for hardware to follow. However it didn't take long before manufacturers wanted to add features that didn't previously exist, where there was no (official or de-facto) standard to follow.
This led to a major problem for operating system software (how do you support "non-standard chaos"). Some standards were created for some things (APM, etc) but they didn't really cover everything needed and became out-of-date. ACPI was created to fix this.
Ideally, what was (and still is) needed is a set of standards that allow an operating system to detect and use the supported features of the motherboard. For example, a "standardised case temperature and fan control" device (with support for detecting how many fans, temperature sensors, etc. there are), a "standardised CPU speed/power consumption" interface, a "PCI slot IRQ routing for IO APICs" standard, a "hot-plug PCI controller device" standard, etc.
However, ACPI didn't provide useful standards that hardware manufacturers and operating systems can use. Instead, ACPI provided an over-engineered mess (AML) to allow an OS to cope with ACPI's failure to standardise the hardware.
Essentially, we "need" AML now because it's the only viable way for an OS to work around the "non-standard chaos" problem that ACPI failed to fix.
The problem with providing native code instead of AML is that different operating systems use CPUs in different ways (e.g. native 64-bit 80x86 code in firmware would be useless for an older "32-bit only" OS). AML provides portability between different types of CPUs and between the same CPU(s) in different modes.
Also, native code is considered a major security problem (rootkits, etc.), and people tend to think an interpreted language mitigates that problem. Of course, in practice AML needs far too much access to the underlying hardware and does it in a way that an OS can't check, and there isn't even a way for an OS to determine whether the AML has been maliciously modified before the OS booted. For these reasons AML is still a major security problem despite being an interpreted language.

What is the alternative to the Win32 API for Linux? [duplicate]

I'm moving from Windows programming (by which I mean using the Windows API) to Linux programming.
For programming Windows, the option we have is the Win32 API (MFC is just a C++ wrapper around it).
I want to know if there is something like a Linux API (equivalent to WINAPI) that is exposed directly to the programmer. Where can I find the reference?
With my limited knowledge of the POSIX library, I see that it wraps around part of the Linux API. But what about creating GUI applications? POSIX doesn't offer that. I know there are tons of third-party widget toolkits like GTK, Qt etc., but I don't want to use libraries that encapsulate the Linux API. I want to learn to use the "core Linux API".
If there are some things that I should know, please tell me. Any programmer who is familiar with both Windows and Linux programming, please map the terminology of the Linux world for me so that I can quickly move on.
Any resources (books, tutorials, references) are highly appreciated.
I think you're looking for something that doesn't exactly exist. Unlike the Win32 API, there is no "Linux API" for doing GUI applications. The closest you can get is the X protocol itself, which is a pretty low-level way of doing GUIs (it's much more detailed and archaic than Win32 GDI, for example). This is why there are wrappers such as GTK and Qt that hide the details of the X protocol.
The X protocol is available to C programs using Xlib.
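As a rough idea of what that low level looks like, here is a minimal Xlib sketch in C that opens a display, creates a bare window and waits for a key press; compile with something like gcc demo.c -lX11. It is only a sketch of the bottom layer that toolkits such as GTK and Qt hide from you.

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void) {
        /* Connect to the X server named in $DISPLAY. */
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 300, 200, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));

        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);              /* make the window visible */

        XEvent ev;
        for (;;) {                         /* minimal event loop */
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)
                break;
        }

        XCloseDisplay(dpy);
        return 0;
    }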
What you must understand is that Linux is very bare in terms of what is contained within it. The "core" Linux API is POSIX and glibc. Linux is NOT graphical by default, so there is no core graphics library. Really, Windows could also be stripped down to have no graphics and thus lack parts of the Win32 API such as GDI. Linux is very lightweight compared to Windows.
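To make the "POSIX and glibc" point concrete, here is a small, hedged sketch in C of what programming against that core API looks like - plain file descriptors and system call wrappers, with no graphics anywhere; "example.txt" is just a placeholder name.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        /* open(2), read(2), write(2), close(2): the "core" Linux/POSIX API. */
        int fd = open("example.txt", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        char buf[256];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);   /* copy the file to stdout */

        close(fd);
        return 0;
    }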
For Linux there are two main graphical toolkits, GTK and Qt. I myself prefer GTK, but I'd research both. Also note that GTK and Qt exist for Windows too, because they are just wrappers. If you take a look at the X protocol code for, say, xterm, you'll see why hardly anyone tries to create graphical applications directly on top of it.
Oh, SDL is also pretty nice. It is fairly bare, but it is handy if you just need a framebuffer for a window. It is portable between Linux and Windows and very easy to learn. But it will only stretch so far.
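For a taste of that framebuffer-style programming, here is a hedged sketch using the classic SDL 1.2 API (the version current when this was written): open a window-sized surface, fill it with a colour and show it for two seconds. Newer SDL versions use a different set of calls, so treat this purely as an illustration.

    #include <SDL/SDL.h>

    int main(int argc, char *argv[]) {
        (void)argc; (void)argv;

        /* SDL 1.2 style: one surface acting as a simple framebuffer. */
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
        if (!screen) {
            SDL_Quit();
            return 1;
        }

        /* Fill the whole surface with solid blue and present it. */
        SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 255));
        SDL_Flip(screen);
        SDL_Delay(2000);

        SDL_Quit();
        return 0;
    }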
Linux and Windows aren't quite as different as they look.
On both systems there exists a kernel that is not graphical.
It's just that Microsoft doesn't document this kernel and publishes an API that references various different components.
On Unix, it's more transparent. There really is a (non-GUI) kernel API and it is published. Then, there are services that run on top of this, optionally, and their interfaces are published without an attempt to merge them into an imaginary layer that doesn't really exist.
So, the lowest GUI level is the X Window System, and it has a lowest-level library called Xlib. There are various libraries that run on top of this one, as you have noted.
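To show just how bare that published kernel interface is, here is a hedged C sketch that bypasses the usual libc wrappers and invokes two Linux system calls directly through syscall(2); in normal code you would of course use the glibc/POSIX wrappers instead.

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <stdio.h>

    int main(void) {
        /* The raw kernel API: a system call number plus arguments, nothing more. */
        long pid = syscall(SYS_getpid);
        printf("my pid is %ld\n", pid);

        const char msg[] = "hello from the kernel API\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);   /* fd 1 = stdout */
        return 0;
    }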
I would highly recommend looking at the Qt/C++ UI framework; it's arguably the most comprehensive UI toolkit for any platform.
We're using it at work to develop cross-platform apps that run on Windows, OS X and Linux.
It also runs on Nokia's smartphone operating system Maemo, which has recently been merged with Intel's Moblin Linux OS; the result is called MeeGo.
This is going to sound insane since you're asking about "serious" stuff like C++ and C (and the "core Linux API"), but you might want to consider building it in something else. For instance:
Java Swing (many people love it! Others hate it and call it obsolete)
Mono GTK# (C# or Visual Basic or whatever you want; lots of people say it's pretty cool, but there aren't that many of them)
Adobe AIR (ActionScript, you might hate it)
Titanium (totally new and unproven, but getting a lot of buzz in the iPhone world, at least)
And many other possibilities, some of which let you work on multiple platforms at once.
Sorry if this answer is not at all what you're looking for. The "real" answers on Linux are "pick a toolkit," which is also no answer at all :)
Have a look at Cairo. It is something roughly similar to GDI+ and is under the hood of some of the few usable GUI programs for Linux, e.g. Firefox or Eclipse (SWT). It wraps most of the nasty and ancient Linux stuff for you into a nice API that runs on most Linux installations without locking you into an entire subsystem like GTK or Qt.
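For a feel of how Cairo compares to GDI+-style drawing, here is a small, hedged sketch in C that paints a rectangle into an image surface and writes it out as a PNG; "out.png" is just a placeholder file name, and a real GUI program would hand the surface to GTK, SWT or an X window instead.

    #include <cairo/cairo.h>

    int main(void) {
        /* Draw off-screen into an ARGB image surface, GDI+-style. */
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 100);
        cairo_t *cr = cairo_create(surface);

        cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);   /* fill colour */
        cairo_rectangle(cr, 10, 10, 180, 80);
        cairo_fill(cr);

        cairo_surface_write_to_png(surface, "out.png");

        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }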
There are also the docs for the two desktop platforms, GNOME and KDE, which might help you down that road.

Possible to use BIOS interrupts in Forth?

I am doing a class project comparing different programming languages. Is it possible to use BIOS interrupts in the Forth language? I can't seem to find any such information on this. If so what would be an example?
I think you're under a mistaken idea that there's a single all-encompassing "Forth" out there. There isn't. There are many Forth implementations. Those that run "bare bones" (without an OS) or under DOS can certainly be coaxed to access the BIOS APIs. Those that run under a 32 or 64 bit operating system like Windows or Linux are unlikely to provide such functionality, since the operating system makes it hard to run BIOS APIs to start with.
When running under Windows, using 16-bit BIOS APIs (as opposed to reading data without running BIOS code) is cumbersome. Modern BIOSes also offer 32-bit APIs, but in all cases you're limited in what hardware you can access (none) - this is enforced by the OS, not by the BIOS code.
Generally speaking, the BIOS APIs are cumbersome and there's no point to using them when you have a full-blown operating system available to you.
Now if you don't care much whether the BIOS calls access real hardware or emulated hardware, you can certainly run a Forth inside something like DOSBox and make BIOS calls against emulated hardware. Heck, DOSBox provides its own BIOS implementation :)

Most suitable Unix platform for developing device drivers

I'm a complete newbie with device drivers, so I hope my question is appropriate, but I need to develop a driver to control some equipment. I was thinking of using Linux as the host OS, but I'm not sure if it is such a good idea. I've heard some horror stories about the mess of developing device drivers under Linux. Is there a better alternative in the *nix world? Or should I maybe check other OSes?
Linux documentation is basically non-existent (much like on other platforms). However, there are a few books which cover enough information to get started, and the trickier kernel bits can be borrowed from other drivers (yay for open source).
However, it is one of the easiest current platforms to develop drivers for. There are cleaner models, such as QNX, but that product is sadly near its end (and doesn't support a tenth as much hardware as Linux).
What type of device is the driver targeting? Many times you can avoid writing in-kernel drivers at all (for instance, by using libusb in user space, or the userspace I/O framework).
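As a sketch of that user-space route, here is a minimal, hedged example using libusb-1.0 from C; the vendor and product IDs (0x1234/0x5678) are placeholders for whatever your equipment reports, and a real driver would go on to claim an interface and perform transfers.

    #include <libusb-1.0/libusb.h>
    #include <stdio.h>

    int main(void) {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0) {
            fprintf(stderr, "libusb_init failed\n");
            return 1;
        }

        /* 0x1234/0x5678 are placeholder IDs - use your device's real ones. */
        libusb_device_handle *dev =
            libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!dev) {
            fprintf(stderr, "device not found (or no permission)\n");
            libusb_exit(ctx);
            return 1;
        }

        /* From here a user-space driver would claim an interface and issue
           control/bulk transfers instead of writing kernel code. */
        libusb_close(dev);
        libusb_exit(ctx);
        return 0;
    }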

Tell me again why we need both .NET and Windows? Why can't Windows morph into the CLR?

The same way DOS morphed into Windows?
We seem to have ended up supporting and developing for three platforms from Microsoft, and I'm not sure where the boundaries are supposed to lie.
Why can't the benefits of the CLR (such as type safety, memory protection, etc.) be built into Windows itself?
Or into the browser? Why an entirely separate virtual machine? (How many levels of virtual machine indirection are we dealing with now? We just added Silverlight - and before that Flash - running inside the browser running inside maybe a VM install...)
I can see raw Windows for servers, but why couldn't there be a CLR for workstations talking directly to the hardware (or at least not the whole Windows legacy ball and chain)?
(Oops - I've got two questions here. Let's make it this: why can't .NET be built into Windows? I understand about backward compatibility, but the safety of what's in .NET could at least optionally be in Windows itself, couldn't it? It would just be yet another of many sets of APIs.)
Factoid - I recall that one of the competing architectures selling against MS-DOS on the IBM PC was the UCSD Pascal runtime - a VM.
And let's not forget that DOS didn't morph into Windows, at least not the Windows we know and love today. DOS was the operating system, Windows 3.1 a GUI shell resting atop said operating system.
When Windows 95 came out, it is true that there was no more boxed product labeled "Microsoft DOS," but Windows 95, architecturally, was DOS 7.0 with a GUI shell resting atop.
This continued through Win98 and WinME (aka Win9X).
The Windows we know today (XP, Vista, 2003, 2008) has its core from the Windows NT project, a totally separate beast. (Although NT was designed to be compatible with 3.1, and later, 9x binaries, and used a near-identical but expanded API.)
DOS no more morphed into the Windows we are familiar with than the original Linux core morphed into KDE.
The two APIs will need to continue to coexist as long as there are products built natively against Windows which are still in a support cycle. Considering that the Windows API still exists in Windows Server 2008 and Windows 7, that means at least 2017. Truthfully, it will probably be longer, because while managed code is a wonderful thing, it is not always the most appropriate/best answer.
Plus ... As a programmer, you ought to know better than anyone: It's never as easy to do something as it might appear from the outside!
Windows is many millions of lines of code, most of it in C. This represents an enormous, decades-long investment. It is constantly being maintained (fixed) for today's users. It would be completely impossible to stop the world while they rewrote every line in C# for ten years, then debugged and optimised for another ten, without totally wrecking their business.
Some of the existing code could in theory be compiled to run on the CLR, but it would gain no benefit from doing so. Compiling a large subset of C to the CLR is possible (using the C++/CLI compiler) but it does not automatically enable garbage collection, for example. You have to redesign from the ground up to get that.
Well, for one, the CLR isn't an operating system. That's a pretty big reason why not... I mean, even the research OS Singularity is not just the CLR. I think you should read some books about the Windows kernel and operating systems in general.
Microsoft are still a few Windows releases away from that.
But they would start with something like Singularity I think.
Because it would break backwards compatibility? And because mainstream chip architectures don't line up with the VM architecture? Hardware for a Java VM was made a while ago, but nobody cared.
The biggest issue I see is that the CLR is a VM, and the VM is useful as a layer of abstraction. Some .NET apps can be run on Linux (see the Mono project; I think they are up to .NET 2 compatibility now), so that would all be gone. In C/C++ or other languages that talk directly to the hardware, you have to recompile your code into different binaries for every OS and hardware architecture. The point of having the VM there is to abstract that away, so that you can write the code, build it, and use the exact same binary anywhere. If you look at it from a Java perspective, they have done a much better job of using their VM as a "write once, run anywhere" model. The same Java classes will run on Windows, Mac, and Linux without a rebuild (by the programmer, anyway; technically the VM is doing that work).
I think the #1 point here is that .NET/CLR is NOT Windows specific, and IMO Microsoft would only help the .NET suite of languages if it put a little more effort toward cross-OS compatibility.
Because Microsoft has a huge legacy they cannot simply drop. Companies have invested lots of money in Windows and Win32 software that they cannot dismiss.
The CLR or some VM may be used (VMs are being used) to run an OS on top of it. But then the question is, what should one use to build the VM? Probably C/C++ or some similar language, and (most) probably assembly in some cases to speed things up.
That would mean the VM will still have the problems that Windows (or any OS) faces now. As pointed out by others, some parts of the OS and related applications may be ported (or, as you said, morphed) to run on top of the VM, but getting the entire OS on top of a VM doesn't serve much purpose. The reason being, the VM would then be the real OS, implementing garbage collection and other protective measures for the morphed OS.
Those are my two cents. :)
What language would the CLR itself use? What APIs would it call? Say it needed to open a file or allocate memory or create a process, you think the CLR is going to do that? The CLR is built on top of native code. A managed OS would create overhead.
The CLR is for app development; it is there to make it easy to write apps and to make software less buggy. It uses a garbage collector, and garbage collectors can destroy performance. They can be great too, but you usually end up with some kind of performance problem during development caused by garbage collection.
They must keep it backward compatible, so it must have some kind of native API.
If you're saying let's make a pure, 100% managed OS and forget backward compatibility, or have some giant compatibility layer, all you're really saying is let's force a garbage collector onto everything, right? Besides a garbage collector and the portability guarantees you get by being CLI-compliant, what are you getting? The algorithms and everything else are still compiled into native code by the time they execute, so the only really significant difference is the memory management.
I have actually seen signs that the CLR will be planted deeper into the software stack. I remember seeing, in the newest Windows software stack, some CLR-related libraries planted into lower levels.
But the CLR won't morph into Windows; we know backward compatibility is very important for the software ecosystem.
