Beyond allowing one file to use another file's attributes, what actually happens behind the scenes? Does it just provide the location of that file so it can be accessed when its contents are later needed, or does it load the implementation's data into memory?
In short:
The header file defines the API for a module. It's a contract listing which methods a third party can call. The module can be considered a black box to third parties.
The implementation implements the module. It is the inside of the black box. As a developer of a module you have to write this, but as a user of a third party module you shouldn't need to know anything about the implementation. The header should contain all the information you need.
Some parts of a header file could be auto-generated - the method declarations, for instance. This would require you to annotate the implementation, as there are likely to be private methods in the implementation which don't form part of the API and don't belong in the header.
Header files sometimes have other information in them: type definitions, constant definitions, etc. These belong in the header file, and not in the implementation.
The main reason for a header is to be able to #include it in some other file, so you can use the functions in one file from that other file. The header contains (only) enough to be able to use the functions, not the functions themselves, so (we hope) compiling it is considerably faster.
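As a minimal sketch of that split (file and function names hypothetical), the header carries only declarations, while the implementation file holds the bodies and any private helpers:

// counter.h - the contract: types, constants, and declarations only
#ifndef COUNTER_H
#define COUNTER_H

#define COUNTER_MAX 100                 // constants belong here...
typedef struct { int value; } Counter;  // ...as do type definitions

void counter_increment(Counter *c);     // declaration only: callers never see the body

#endif

// counter.cpp - the inside of the black box; the only file that must be
// recompiled when the implementation changes
#include "counter.h"

static void log_change(Counter *c) { }  // private helper: deliberately absent from the header

void counter_increment(Counter *c) {
    if (c->value < COUNTER_MAX) c->value++;
    log_change(c);
}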
Maintaining the two separately mostly results from nobody ever having written an editor that automates the process very well. There's not really a lot of reason it couldn't be done, and a few editors have even tried to -- but the ones that did never sold very well, and the more mainstream editors haven't adopted the feature.
Well, I will try:
Header files are only needed in the preprocessing phase. Once the preprocessor is done with them, the compiler never even sees them. Obviously, the target system doesn't need them for execution either (the same way .c files aren't needed).
Libraries, in contrast, come into play during the linking phase. If a program is dynamically linked and the target environment doesn't have the necessary libraries, in the right places and with the right versions, it won't run.
In C nothing like that is needed for the headers, since once you compile you get native code. The header files are copy-pasted into whatever file #includes them. This is very different from the byte-code you get from Java: there's no need for an interpreter (like the JVM); you just feed your binary to the CPU and it does its thing.
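To make that concrete, here is roughly what the preprocessor's copy-paste looks like, continuing the hypothetical counter example above (you can see the real thing with g++ -E main.cpp):

// main.cpp as written:
#include "counter.h"

int main() {
    Counter c = {0};
    counter_increment(&c);
    return c.value;
}

// main.cpp as the compiler sees it after preprocessing: the header's text
// has been pasted in (macros expanded and removed), yet the body of
// counter_increment() is nowhere in sight -- the linker resolves that
// call against counter.o later.
typedef struct { int value; } Counter;
void counter_increment(Counter *c);

int main() {
    Counter c = {0};
    counter_increment(&c);
    return c.value;
}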
Related
I'm in the process of making a new iOS tweak. I grabbed iOS Headers: https://github.com/MP0w/iOS-Headers.
Later on I found another repository on GitHub named iOS Runtime Headers: https://github.com/nst/iOS-Runtime-Headers
Now I'm confused. What is the difference between these two?
There are 3 main sources for headers: from the developer of the code, from class-dump, and from a runtime header dumping tool.
Apple or SDK developers will release a header file that includes the public interface they intend other developers to use. It might not include some method/variable declarations they don't want you to see. UIView.h from Apple's SDK would be a great example of a header with certain info hidden.
Just because they didn't include those methods in the header file doesn't mean that instances of those classes can't respond to them. This is where a tool like class-dump comes in: it looks through the compiled Mach-O files to determine which methods/ivars each class contains, and generates a header according to that.
Entirely new classes, methods, and ivars can be added or removed at runtime, using Objective-C's runtime features. Things like categories that get loaded from other SDKs/object files won't appear in a class-dump of the original class either. For these reasons, runtime dumping tools can see what instances of these classes can actually respond to at runtime.
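As a rough sketch of the idea behind such tools (not the actual code of either repository), the C API in <objc/runtime.h> lets you enumerate every method a class responds to at that moment, including ones added by categories or at runtime:

// Sketch: dump a class's instance methods at runtime via <objc/runtime.h>.
// Build as part of an iOS/macOS target; the class name passed in is just an example.
#include <objc/runtime.h>
#include <cstdio>
#include <cstdlib>

void dump_instance_methods(const char *class_name) {
    Class cls = objc_getClass(class_name);
    if (!cls) return;

    unsigned int count = 0;
    // Returns what the class implements *right now*, including methods
    // added by categories or by runtime calls such as class_addMethod().
    Method *methods = class_copyMethodList(cls, &count);
    for (unsigned int i = 0; i < count; i++)
        printf("-[%s %s]\n", class_name, sel_getName(method_getName(methods[i])));
    free(methods);
}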
Each set of headers can be useful in determining the intended and unintended uses for a class; knowing the differences can help you get a clearer picture of whatever you're reverse engineering.
It's hard to say what the differences between those two repositories are. They are both sets of headers generated with runtime introspection. The first one says it contains both public and private headers.
Just so everybody understands, both links point to sets of headers that give you access to private OS APIs. Using these will get you rejected from the App Store. They're only really useful for developing apps for the developer's personal use, or for jailbroken development.
The vapi file that is available for librsvg-2.0 contains a lot less than what the actual library contains.
vapi: http://valadoc.org/#!wiki=librsvg-2.0/index
library: https://git.gnome.org/browse/librsvg/tree/
I would have expected to have access to components like an RsvgNode, to be able to access and alter the SVG contents directly, but neither the vapi nor the header files that are installed with the devel package contain much of what's in the library's own headers. I assume this has something to do with making the library GObject friendly, but I'm interested in more than what's there.
Is there a way to add headers, extend the vapi, and use the structs and functions that I need?
It's possible that this is not even what I should be doing; the contents of the library use the G_GNUC_HIDDEN macro pretty liberally, suggesting that they don't want to give you access. But then I'm wondering how you can edit an SVG document/element live while displaying it in a Cairo context? I'm sure I could edit it using libxml, but I don't know how to refresh the context without reloading the SVG data and recreating the surface.
Thanks.
Just asked Christian Persch about this on IRC. His response was:
that's right, all that stuff is not exported, and it's not in any state to be exported. there is no way with librsvg to change the svg without creating a new context and loading the new svg xml into it
If the library doesn't export the stuff on the C level there isn't really a lot you can do at the Vala level. Creating bindings wouldn't be very difficult, but the API that it binds really needs to be public.
Depending on your use case, perhaps you'd be happier using Clutter?
I am currently migrating a large RAD Studio 2010 project to XE4. As part of this, I am recreating many of the project files. I would like to take the opportunity to ensure we are using the best possible mechanism for precompiled headers, since there seem to be a few ways to do it.
Right now we are compiling for 32-bit only but will use the 64-bit compiler in future.
Here's what we're currently doing in 2010, and why I'm unsure about what to do in XE4:
In RAD Studio 2010
We have a file PchApp.h which includes <vcl.h> and a number of other commonly-used header files, mostly headers for various commonly-used core classes in the project. This header is included at the top of every CPP file followed by #pragma hdrstop, like so:
// Top of .cpp file
#include "PchApp.h"
#pragma hdrstop
// Normal includes here
#include "other.h"
#include "other2.h"
// etc
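For concreteness, PchApp.h itself is along these lines (the project-specific header names below are placeholders): only stable, widely-included headers, so the precompiled image rarely has to be rebuilt:

// PchApp.h - contents of our precompiled header (project header names are placeholders)
#ifndef PCHAPP_H
#define PCHAPP_H

#include <vcl.h>        // the VCL: large and never changes between builds
#include <vector>       // standard library headers are good candidates too
#include <map>
#include "CoreTypes.h"  // placeholder: stable, project-wide core classes
#include "CoreUtils.h"  // placeholder: included by nearly every unit

#endif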
We then have a number of options set in the Precompiled Headers section of the project options.
It is not particularly fast to compile (12 minutes for circa 350,000 lines of code). I am unsure about:
"Inject precompiled header file": should this inject PchApp.h?
"Cache precompiled headers (Must be used with -H or -H"xxx")": the -H option is the "PCH filename", so we are using it, but surely the point of a precompiled header is that it is "cached" or prebuilt once per compile. What extra difference does this make?
Should we have the two lines to include PchApp.h and the pragma hdrstop in the .cpp files? Is there a way to do this in the project options only, and not duplicate these two lines in every single file? Are they necessary?
In other words, I am not sure these are correct or optimal settings, but from reading the documentation I'm equally not sure what would be better. I am aware I don't understand all the options well enough - one reason for this question :)
In RAD Studio XE4
The XE4 32-bit compiler's options dialog is the same, but two things confuse me and/or make me uncertain the current 2010 approach is the best.
1. Default behaviour
When creating a new VCL Forms project, the IDE creates a header named by default Project1PCH1.h, which is intended to be the project's precompiled header. This header includes <vcl.h> and <tchar.h>, and is shown as a node in the Project Manager. It is not included in the default Form1.cpp, but #include <vcl.h> followed by #pragma hdrstop is at the very top of Form1.cpp, followed by other headers.
I then looked at the default XE4 settings dialog for a new project using this header.
I am (naively?) working on the assumption the defaults are actually the best / most optimal settings. Some things puzzle me:
The project's supposed precompiled header Project1PCH1.h is not mentioned in the precompiled header settings anywhere.
The headers aren't cached
The PCH filename isn't specified (should this be Project1PCH1.h?)
The .cpp files don't include Project1PCH1.h either.
In fact I have no idea how the compiler or IDE actually know that it is supposed to use Project1PCH1.h or for which .cpp files it is supposed to use it, since it isn't referred to in any way I can find.
This is the most puzzling thing to me, and the spur to ask this question and clear up all my confusion about PCHes. I had planned to copy/use the IDE's default settings, but I don't want to until I understand what they are doing.
2. PCH Wizard
Since 2010, the IDE has included a precompiled header wizard. I haven't ever been able to get it to work - I am running it again right now to get its results and explain my memory of "doesn't work", but it seems to take several hours, so I will update this question later.
Edit: it runs, though it takes several hours, and produced a list of (to me, knowing the source base) odd headers. My recollection of trying it several years ago is that it didn't run at all - a definite improvement.
Since it exists, it may be the best way to set up using precompiled headers in a newly created project file formed to upgrade the 2010 project. How do I best do so? Will all the .cpp files including PchApp.h confuse it?
Questions
With that as background, I have the following questions:
Existing settings. I am creating a new project file and adding thousands of pre-existing .cpp files, all with "#include PchApp.h; #pragma hdrstop" at the top. Should I copy the existing RS2010 PCH settings? Should I remove the above two lines and replace them with something else?
Using the PCH wizard: Does this, in your experience, create optimal settings? Does it include files that, if modified, will cause large areas of the project to be rebuilt (probably non-optimal for coding)? Is it possible to use on an existing project, or do items like our "#include PchApp.h" need to be stripped out before using it?
CPP files / units and the correct includes. Should .cpp files that use precompiled headers not include the precompiled header itself, but only the headers that the .cpp actually needs, even if the PCH includes those? What if you have our current situation, where the PchApp.h file includes several common headers and so the .cpp files don't actually include those themselves? If you remove the inclusion of PchApp.h and replace it with the subset of headers in PchApp.h that the specific .cpp files needs, should they be above or below the #pragma hdrstop? (Above, I think.) What if you then include something else above with them which is not included in the precompiled header - will it change PCH usage for that specific unit, cause the PCH to be rebuilt (performance issues?), etc?
Default setup: Assuming the default setup for a new project is optimal, how is best to migrate the current system to using it?
Non-default setup: If the default setup is not optimal, what is? This, I guess, is the key question.
32 and 64-bit: Knowing that we'll move to 64-bit soon, what should we do to have precompiled headers work on both 32 and 64 bit? Should all PCH knowledge be in the project options rather than .cpp files, so different settings for 32 and 64-bit compilation?
I am seeking a clear, detailed, explanatory, guiding answer, one that clearly explains the best practice, setting options, items to include in the .cpp files, header, and/or project file, and so forth - in other words, something to clear up my by now (after all the above!) rather confused understanding. A high-quality answer that can be used as the go-to PCH reference by other C++Builder users in future would be excellent. I intend to add a bounty in a couple of days when I am able to.
Existing settings. In my experience I have usually changed these settings, because if you have hundreds of files the defaults just do not seem optimal. In Xcode, for example, automatic injection is the default configuration. There should be no compilation performance difference.
Using the PCH wizard. Honestly, I have never used it in a real project, and it hasn't impressed me, so I just forgot about it and used manual settings.
CPP files / units and the correct includes. Different IDEs have different default settings for that. What I have usually used is:
Inject precompiled headers automatically (no manual #include in .cpp)
First include appropriate header matching .cpp if one exists (myfile.cpp - then include myfile.h)
After that include all the specific headers that do specific job (specific lib headers, etc.)
In "myfile.h" include ONLY stuff that is a must. Avoid any stuff you can avoid.
Everything you include specifically for a particular .cpp file should be below #pragma hdrstop. Everything you want to be precompiled should be above (see the sketch below).
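In a unit that does use an explicit #pragma hdrstop, that ordering looks like this (header names are illustrative):

// MyFile.cpp - everything above the pragma goes into the precompiled image
#include <vcl.h>         // shared, stable headers: precompile these
#pragma hdrstop

#include "MyFile.h"      // the unit's own header first
#include "WidgetLib.h"   // illustrative: headers only this unit needs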
Default setup. I don't think it's optimal. For me it's much easier to migrate by just changing a couple of options in the settings.
Non-default setup. As I mentioned above, for me the optimal setup is automatic injection of the precompiled header. More details in item 3.
32 and 64-bit. I haven't experienced any problems with that. The compiler should generate its own precompiled headers for each particular configuration.
Here's what I do (although I am not sure whether it is a good idea, it seems to work):
Make sure Project1PCH1.h exists (where Project1 is the name of the project)
Make it contain #pragma hdrstop and 2 trailing newlines (I got weird errors when I didn't have the trailing newlines; maybe a compiler bug)
In "All Configurations" put into "Inject precompiled header file" then name "Project1PCH1.h"
Do not put anything such as #include "PchApp.h" or #pragma hdrstop in the other files.
Check everything builds correctly (i.e. files have the right includes on their own merit, not relying on the injected PCH)
Put some includes into the Project1PCH1.h. I use the wizard to come up with some suggestions, but you have to apply some human logic as well to get a good build.
When it's working properly in 32-bit mode everything compiles lightning quick; you can tell you have not quite got something right if one particular .cpp file takes a lot longer than the rest to compile. The wizard makes suggestions based on how many files include the given header, but that's somewhat bogus; you need to include any system header (or boost header etc.) that would add significantly to the compilation time if it were not part of the PCH.
I don't bother to include my own project headers in it, just system and standard headers. That may differ for you depending on your project, IDK.
The PCH doesn't work for .c files, so if you have any of those in your project you'll need to make Project1PCH1.h have #ifdef __cplusplus guards.
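Putting those points together, the file ends up looking something like this (the boost include is just an example of a heavyweight header worth precompiling):

// Project1PCH1.h - named in "Inject precompiled header file"
#ifdef __cplusplus                 // guard: plain .c files cannot use these headers
#include <vcl.h>
#include <tchar.h>
#include <boost/shared_ptr.hpp>    // example: any widely-used heavyweight header
#endif
#pragma hdrstop
// (two trailing newlines follow, per the note above)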
Also: even though bcc64 doesn't support PCH (though it does inject the file), if you do have your PCH set up right it does seem to make compilation go a fair bit faster; I'm not exactly sure why.
Things I don't understand about it yet:
Why does the New Project wizard autogenerate Project1PCH1.h but not actually set that in the "Inject Precompiled Header" field of Project Properties?
Sometimes the build fails saying it cannot open Project1PCH1.h, but making some changes and re-saving it usually seems to fix this.
I am working on an iOS app which links several static libraries. The challenge is, those linked libraries define the same method names with different implementations. Oddly, I don't get any duplicate symbol definition errors; but, to no surprise, I end up with access to only one implementation of the method.
To be more clear, say I have libA and libB, and they both define a global C function called func1().
When I link both libA and libB, and make a call to func1(), it resolves to either libA's or libB's implementation without any compilation warning. I, however, need to be able to access both libA's func1() and libB's func1() separately.
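To make the collision concrete (sketched with the names from the question), both archives export the same symbol, and the linker silently resolves the call against whichever it finds first:

// func1.c, compiled into libA.a
int func1(void) { return 1; }

// func1.c, compiled into libB.a
int func1(void) { return 2; }

// app code, linking both libraries
extern "C" int func1(void);
int value = func1();   // resolves to exactly one of the two definitions;
                       // static libraries produce no duplicate-symbol error here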
There's a similar SO post that explains how it can be done in C (via symbol renaming) but unfortunately, as I found out, objcopy tool doesn't work for ARM architecture (hence iPhone).
(I will submit it to the App Store, hence, dynamic linking is not an option)
It appears that you are in luck - you can still rename symbols in the ARM binary format; it's just a bit more hacky than the objcopy method...
NOTE: This has only been tested minimally, and I would strongly advise you to make a backup of all libraries in question before trying this!
Also note that this only works for files not compiled with the C++ compiler! It will fail if the C++ compiler was used on these files, because of C++ name mangling.
First, you will need a decent hex editor, for this example, I will be using Hex Fiend.
Next, you will open up a copy of one of your libraries, let's call it lib1-renamed.a, and do the following with it:
Find the name of the symbol you wish to rename. It can be found using the nm tool, or, if you know the header name, you should be set.
Next, you will use Hex Fiend to do a textual replace of the old name (in this case foo), giving it a new name (in this case, bar). These names must have the same length, or it will corrupt the binary's offsets!
Note: if there is more than one function that contains foo's name in it, you may have problems.
Now, you must edit the headers of the library you changed, to use the new function name (bar) instead of the old one.
If you have done the three simple† steps above properly, you should now be able to compile & link the two files successfully, and call both implementations.
If you are trying to do this with a universal binary (e.g. one that works on the simulator as well), you'd be best off using lipo to separate the two binaries, using objcopy on the i386/x64 binary, then using my method on the ARM binary, and lipo-ing it back together.
†: Simplicity is not guaranteed, nor is it covered by the Richard J. Ross III super warranty. For more information about the super warranty, call 1-800-FREE-WARRANTY now. That's 1-800-FREE-WARRANTY now!
J2ME lacks the java.util.Properties class. Although it is possible to put application settings in the JAD file, this is not recommended for many properties, since some platforms limit the size of the JAD file. I want to put a configuration file inside my jar file and parse it. And I do not want to go with XML because that would be overkill for my case.
The question is: is there an existing library for J2ME that can parse properties files, or something similar such as INI files? Or would you recommend another method to solve the initial problem?
The best solution probably depends on what is going to be generating the properties files.
If you've got other non-JavaME projects using the same properties files, then stick with them, and write or find a parser. (There is a simple one from GoBible available on Google Code.)
However, you might find it just as easy to keep your configuration as static final String myproperty="myvalue"; entries in a Configuration.java file which you compile and include in the jar instead, since you then do not need any special code to locate, open, read, and parse a file.
You do then pick up a limitation on what you can call the properties, though, since you can no longer use the common dot-separated namespacing idiom.