Portability vs. platform independence - a comparison

Can someone explain the difference between these two? If a program is platform independent, doesn't that also make it portable, and vice versa?

Platform independent means the program can run on "nearly" all operating systems. It doesn't need to be all of them, but covering at least Windows, Linux and Mac is enough to justify the term.
Now, let's dig out the facts behind the term "platform independent".
NOTE: The following sentences are my opinions. If you read them and can't follow the logic, or simply don't like them, you can just press CTRL + W to close the tab. I note this because Java programmers get seriously mad when they face these sentences. But it is always open for discussion.
Please check: http://en.wikipedia.org/wiki/Platform-independent_model
OK, let's not digress - back to the topic.
Actually, and logically, any program that needs a per-platform runtime installed first can't be considered platform independent. For example, I can't run Java executables without downloading and installing the Java runtime packages, services, etc. So how can we say it is platform independent? If we can, then nearly 80% of Windows executables are also platform independent, since you can run them with virtual machines or WINE on Linux. And surely, if Java is platform independent, then PHP, ASP, Perl, Python, Ruby and all the other scripting languages are platform independent too!? Ah, surely not...
Hope you got the logic.
But what we can do is compile our (own) software for many different OSes. Then our software will be "cross platform".
What can be platform independent in the real sense (as I wrote at the top, my opinion) is:
Uncompiled source code - assembly, C, etc.
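To make that concrete, here is a minimal sketch of my own (the post itself contains no code): a pure ISO C source file is "platform independent" only as source code. The same file compiles unchanged with gcc on Linux, clang on Mac or MSVC on Windows, but each compile produces a platform-specific binary.

    /* hello_portable.c - same source everywhere, different binary per platform.
     * Build examples: gcc hello_portable.c  or  cl hello_portable.c */
    #include <stdio.h>

    int main(void)
    {
        printf("One source file, one binary per platform.\n");
        return 0;
    }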
When it comes to "portable", that is something else. For example, "portable software" under the Windows operating system means:
It doesn't use the registry or the AppData folder for its files or settings.
It works from its own folder; all the files it needs are located under its own folder.
It saves its settings to a file (INI, etc.) under its own folder.
And if we go a bit further in meaning, it also mustn't rely on a specific hardware/software brand, model or unusual mode (like an "x" brand graphics card, "y" resolution, or "z" release of DirectX). In practice you can ignore that last detail, since the criterion is not yet mature enough to be accepted by everyone.
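As an illustration of the folder-relative rule, a minimal Windows C sketch of my own (the file name settings.ini and the key names are hypothetical): it stores its settings in an INI file next to the executable instead of in the registry or AppData.

    /* portable_settings.c - keep settings beside the exe, not in the registry.
     * Build with e.g. MinGW: gcc portable_settings.c -o demo.exe */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char path[MAX_PATH];

        /* Find the folder the executable itself lives in. */
        GetModuleFileNameA(NULL, path, MAX_PATH);
        char *slash = strrchr(path, '\\');
        if (slash != NULL)
            *(slash + 1) = '\0';
        strncat(path, "settings.ini", MAX_PATH - strlen(path) - 1);

        /* Write and read settings relative to the exe. */
        WritePrivateProfileStringA("ui", "theme", "dark", path);

        char theme[32];
        GetPrivateProfileStringA("ui", "theme", "light", theme, sizeof(theme), path);
        printf("theme = %s (stored in %s)\n", theme, path);
        return 0;
    }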

Related

Detailed explanation of "re-hosting" and "retargeting", both for compilers and for binary data (such as .exe or .obj)

Sometimes in software companies, customers provide data in multiple formats. There is linkable and executable data that is said to be "rehosted", and there are compiled object files that are said to be "retargeted". I am trying to understand what rehosting and retargeting mean in this area. Is it similar to bootstrapping in computer science? My understanding of the process is as follows (correct me if I'm wrong):
PROBLEM:
I need to write a compiler for a new language called "MyLang" to run on PowerPC
Solution:
1. I need to write a compiler for a language "MyLang-Mini", a subset of "MyLang", to run on PowerPC.
2. I need to write a compiler for "MyLang", written in "MyLang-Mini", to run on PowerPC.
3. I run the compiler obtained in step 2 through the compiler obtained in step 1 to obtain the compiler for MyLang that runs on PowerPC.
IN BESPOKE "T" DIAGRAM (...ISH):

    [MyLang -> PowerPC]               [MyLang -> PowerPC]
    written in MyLang-Mini     ==>    written in PowerPC(instr.)
          run through
    [MyLang-Mini -> PowerPC]
    written in PowerPC(instr.)
What I am getting confused about is rehosting and retargeting. How are they connected to this concept? What am I rehosting and retargeting if I have some binary data such as a .exe or .obj? I would appreciate a detailed explanation if possible, please!
I know that this will touch on "cross-compilers", but I would prefer expert opinions to be sure.
Thanks in advance.
I now know that in software engineering:
REHOSTING - If you have a third-party linkable/executable application that you need to use on your host machine, you do rehosting. The host and target in this case are most often the same (OS platform, processor, etc.). In the worst case, virtualisation is required. The rehosted application will run as if it were one of the applications on the host machine.
RETARGETING - If you have third-party source code, you may need to recompile it to match your target environment. It may also be that you have third-party .o or .obj compiled modules and you want to link them with your source code (retargeted) in order to host it on the host machine. Just like a rehosted application, it will then run as if it had been installed on the host machine.
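To make the .o/.obj case concrete, a small hypothetical sketch (the names widget.o and widget_init, and the build lines, are my own examples, not from the thread):

    /* main.c - our source, linked against a vendor-supplied object file.
     * The vendor's widget.o must be built for the same target, e.g.:
     *   gcc -c main.c -o main.o       (compile our code)
     *   gcc main.o widget.o -o app    (link the third-party object in) */
    extern int widget_init(void);  /* symbol assumed to be exported by widget.o */

    int main(void)
    {
        return (widget_init() == 0) ? 0 : 1;
    }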
It would be good to know how this relates to compiler rehosting and retargeting. Sorry, I am a newbie in this area and will appreciate even a slap on the wrist.

Could I install Delphi and my libraries on a USB key in such a way as to allow debugging of my app on a customer's PC?

Back in the days of Delphi 7, remote debugging was mostly ok. You set up a TCP/IP connection, tweaked a few things in the linker and you could (just about) step through code running on another PC whilst keeping your Delphi IDE and its libraries on your development PC.
Today, with Delphi XE2/3/4, you have paserver which, at least at the moment, can be flaky and slow. It is essential for iOS (cross-platform) development, but here at Applied Relay Testing we often have to debug on embedded PCs that run recent Windows. To do this we have employed a number of strategies, but the hardest situation of all is visiting a customer site and wishing one could 'drop in' a Delphi IDE + libraries and roll up one's sleeves to step through and set breakpoints in source code.
It is quite likely - hopefully - that the paserver remote-debugging workflow and its incarnations will improve over time, but for now I got to wondering how it might be possible to install Delphi + libraries + our source code on a USB key so that, with only a minimal and perhaps automated setup, one could plug that key into a PC and be compiling, running and debugging fairly quickly.
I can see that the registry is one of the possible issues; however, I do remember that Embarcadero once talked about being able to run their apps from a USB key. Knowing how much of a pain it is to install my 20-odd libraries into Delphi, though, it is not trivial and needs thinking about.
Has anyone done anything like this or have any ideas of how it might be done?
Delphi does not support what you are asking for. But what you could do is create a virtual machine with your OS, IDE, libraries, etc. installed in it, then copy the VM onto a USB drive, install the VM software on the customer's system, and run your VM as-is. There are plenty of VM systems to choose from.
First, I need to get this out of the way: embedded PCs running Windows?? Sob.
Ok, now for a suggestion: if a full virtual machine isn't an option for this task, application-level virtualization may be. This intercepts registry calls and other application-level information and maps them to a local copy, allowing essentially any application to be turned into a portable version. The good news is that there are free versions of several programs that can turn Windows programs into virtualized apps.
The only one I've personally used is MojoPac, and I found it delivered as promised, although it was very slow running off an (old, very slow) flash drive.
http://lifehacker.com/309233/make-any-application-portable-with-mojopac-freedom
I haven't used this newer "freedom" version though.
Two other programs I've seen that appear to be popular are Cameyo:
http://www.techsupportalert.com/content/create-your-own-portable-virtual-version-any-windows-program.htm
and P-Apps,
http://dottech.org/26404/create-a-portable-version-of-any-software-with-p-apps/
but I can't vouch for the quality of either of these two.
Hope this helps.

Cross compiling for several systems at once

Question: How do I write a single makefile that compiles for several different systems, environments, and sets of libraries at once?
Info:
I'm a student, and as such most of my work is done for the sake of learning how these things work. Right now I'm building a game engine from the ground up. I'd like to make it cross-platform in terms of OS, but also across different environments. My target environments are 32- and 64-bit (my desktop as well as my netbook), with a graphics card and with Mesa, on both Linux and Windows. So overall it should output 8 binaries.
I'm still very new to make, as well as to the whole concept of cross-compiling. I imagine that the process of compiling more than one binary isn't hard, but where I'm kind of stuck is: how do I get it to link the right libraries? The Ubuntu Linux vs. the WinAPI libraries, 32-bit vs. 64-bit libraries, and so on. Is it even possible to get libraries in such a manner?
If you need me to clarify further I can. Thanks.
Addendum: Basically I want to know how to compile against headers for drivers I may not have. For example, I want to compile all the files on my netbook, including the ones for OpenCL; I don't want to run them, as my netbook has no GPU, I just want to compile. Conversely, I want to use my desktop to compile for my netbook, which uses Ocelot and Mesa for its GPU dealings, but my desktop does not have Mesa or Ocelot on it. That sort of thing. Thanks.
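To illustrate the point in the addendum: compiling needs only the headers (plus stub/import libraries at link time); the GPU and its driver matter only when the binary runs. A minimal sketch of my own, where the flag names USE_OPENCL and USE_MESA, and the build lines in the comments, are hypothetical examples of what a makefile could pass per target:

    /* render_backend.c - one source, many targets, selected at compile time.
     * Hypothetical build lines a makefile might generate:
     *   x86_64-w64-mingw32-gcc -DUSE_OPENCL -c render_backend.c   (64-bit Windows)
     *   gcc -m32 -DUSE_MESA -c render_backend.c                   (32-bit Linux) */
    #include <stdio.h>

    #if defined(USE_OPENCL)
    #include <CL/cl.h>   /* only the header is needed to compile; no GPU required */
    #elif defined(USE_MESA)
    #include <GL/gl.h>   /* Mesa provides this header on Linux */
    #endif

    void describe_build(void)
    {
    #if defined(_WIN32)
        const char *os = "Windows";
    #elif defined(__linux__)
        const char *os = "Linux";
    #else
        const char *os = "unknown OS";
    #endif

    #if defined(USE_OPENCL)
        printf("%s build, OpenCL backend\n", os);
    #elif defined(USE_MESA)
        printf("%s build, Mesa/OpenGL backend\n", os);
    #else
        printf("%s build, no GPU backend\n", os);
    #endif
    }

    int main(void)
    {
        describe_build();
        return 0;
    }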

What's the difference between retail symbols and checked symbols?

Windows XP with Service Pack 3 x86 retail symbols, all languages (File size: 209 MB - Most customers want this package.)
Windows XP with Service Pack 3 x86 checked symbols, all languages (File size: 202 MB)
Quoted from here.
What's the difference between retail symbols and checked symbols?
In general, the difference between "retail" and "checked" is similar to a "release" versus "debug" build. Microsoft provides two different kernels, one compiled for regular use and one with extra debug information. The two different builds also have two different symbol tables.
If you are an IT or Computer Science student in college (or if you happen to have access to MSDN's e-Academy software), you will probably have access to the special debug/checked builds of Windows Vista/7. Some professionals in the software development and engineering industries may have installations of the special debug builds as well. Otherwise, whether you come across Home or Professional editions--even Enterprise and Business editions--it will most likely be the retail version. All of those versions will require the retail version of the debugging symbols. However, if you have a debug/checked build of Windows installed, you will need the checked debug symbols.
As Greg has explained, a debugging symbol is basically a name for an address. As far as I understand, symbols give a proper name to each function or item in memory, so when a user is debugging a process or viewing a call stack, he or she will see usable information instead of raw address offsets.
Greg answered this already as well, but I'll try to elaborate. The retail and debug builds of Windows need different versions of the symbols because the operating system files are compiled differently, to include more useful debugging information. This makes the addresses of the symbols move ever so slightly, so a different package is required to correctly identify everything in memory.
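To make the "name for an address" idea concrete, a tiny illustration of my own (not from the thread): compile this with debug information (e.g. gcc -g) and a debugger can show the name my_function where, without symbols, it would only see the raw address the program prints.

    /* symbols_demo.c - a debug symbol maps a raw address to a readable name. */
    #include <stdio.h>

    static int my_function(int x)
    {
        return x * 2;
    }

    int main(void)
    {
        /* Without symbols a debugger or crash dump sees only this address;
         * with symbols it can display "my_function" and its source line. */
        printf("my_function lives at %p\n", (void *)&my_function);
        return (my_function(21) == 42) ? 0 : 1;
    }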
The one thing I'm confused by is why the checked symbol package is smaller; I would have figured it would be bigger. A guru might know the reason for that. Speaking of which, I'd like to make it clear that I'm no debugging expert; I'm just fascinated by the science behind it. Nonetheless, I hope this helped you out.
Good luck gdb.
For practical purposes, a description of both packages is given in the Microsoft article https://developer.microsoft.com/en-us/windows/hardware/download-symbols . To be precise:
"Almost all customers require the symbols for the retail version. If you are debugging a special version of Windows with extra debugging information, then you should download the symbols for the checked version."
In other words, you most likely need the retail version.

Grouping DLLs for use in an Executable

Is there a way to group a bunch of DLLs and still use them at run time (not zipped up)? Sorry, this question sounds terse and stupid, but I'm not sure what more to ask.
I'll explain the situation though:
We've had two standalone Windows applications, and now one of them has swelled to such ungainly proportions that the other application cannot run outside the scope of the first app. We want to maintain some of the encapsulation we had, while letting the smaller program in on some of the bigger program's features.
There is no problem in running the application, other than that we don't want to ship all the 20-30 DLLs that the smaller project has.
It is possible to do this by adding startup code which checks whether the DLLs are present on the target system and, if not, extracts them from the resources section (or from data simply tacked onto the end of the exe). A good example of this being done is Process Explorer - it's distributed as a single binary, but when run it extracts and installs a driver.
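A minimal C sketch of that startup check (my own illustration; the resource name "MYDLL" and the file name helper.dll are hypothetical, and the DLL is assumed to have been embedded at build time with a resource script line such as: MYDLL RCDATA "helper.dll"):

    /* extract_dll.c - extract an embedded DLL to disk if missing, then load it. */
    #include <windows.h>
    #include <stdio.h>

    static HMODULE load_embedded_dll(const char *res_name, const char *dll_path)
    {
        /* If a previous run already extracted it, just load it. */
        if (GetFileAttributesA(dll_path) == INVALID_FILE_ATTRIBUTES) {
            HRSRC res = FindResourceA(NULL, res_name, (LPCSTR)RT_RCDATA);
            if (res == NULL)
                return NULL;

            HGLOBAL blob = LoadResource(NULL, res);
            void   *data = LockResource(blob);
            DWORD   size = SizeofResource(NULL, res);

            FILE *f = fopen(dll_path, "wb");
            if (f == NULL)
                return NULL;
            fwrite(data, 1, size, f);
            fclose(f);
        }
        return LoadLibraryA(dll_path);
    }

    int main(void)
    {
        HMODULE dll = load_embedded_dll("MYDLL", "helper.dll");
        printf("helper.dll %s\n", dll ? "loaded" : "not available");
        return 0;
    }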
If you have a situation where most, or all, of those assemblies have to be kept together, then I would highly recommend just merging the code files into the same project and recompiling. This would leave you with one assembly.
Of course there are other considerations, like compile time, the overall size of the final DLL, how often various pieces change, and whether each component is deployed without the others.
One example of a company that did this is Telerik. Their dev components are all compiled into the same assembly. This makes deployment an absolute breeze. Contrast that with Dev Express, which puts just about each control into its own assembly. Because of this, just maintaining, much less deploying, a Dev Express project is not something for the faint of heart.
(I don't work for either of those companies. However, I have a lot of experience with both toolkits.)
You could store the DLLs as resources and use BTMemoryModule, which essentially allows you to call LoadLibrary on a stream.
That way you could compile the multiple DLLs straight into the EXE, or into a single resource DLL.
see http://www.jasontpenny.com/blog/2009/05/01/using-dlls-stored-as-resources-in-delphi-programs/
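For comparison, the same in-memory idea in C rather than Delphi: Joachim Bauch's MemoryModule library (which BTMemoryModule was ported from) maps a DLL image directly from memory, with no temp file on disk. A hedged sketch only; verify the function signatures against the library's MemoryModule.h, and note that "MYDLL" and "helper_init" are hypothetical names as before.

    /* memload.c - load an embedded DLL straight from memory (no file on disk). */
    #include <windows.h>
    #include "MemoryModule.h"   /* https://github.com/fancycode/MemoryModule */

    int main(void)
    {
        /* Locate the embedded DLL in our resources. */
        HRSRC res = FindResourceA(NULL, "MYDLL", (LPCSTR)RT_RCDATA);
        if (res == NULL)
            return 1;
        void *data = LockResource(LoadResource(NULL, res));
        DWORD size = SizeofResource(NULL, res);

        /* Map and relocate the DLL image in memory instead of LoadLibrary. */
        HMEMORYMODULE dll = MemoryLoadLibrary(data, size);
        if (dll == NULL)
            return 1;

        /* Resolve an exported function by name. */
        typedef int (*init_fn)(void);
        init_fn init = (init_fn)MemoryGetProcAddress(dll, "helper_init");
        int rc = init ? init() : 1;

        MemoryFreeLibrary(dll);
        return rc;
    }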
