I see that many frameworks have something like a Globals.h which simply imports all the header files, making it very easy elsewhere to import that single Globals.h header and access everything.
I'm not using any specific framework; I'm just working on a standard, relatively simple project. I'm doing a lot of importing and I started to think: can't I just apply this technique myself?
Can I? Or will it lead to potential problems, like recursive importing? I just wondered whether there are particular methods or situations where you would use this, or things to watch out for.
Thanks.
Doing this inside a framework makes sense: you don't want the end user to import a bunch of classes, so you instruct them to import just one. That header will take care of the rest.
Doing something similar inside a project can help you simplify things: if you always import a.h, b.h, and c.h together, creating a header file called abAndC.h would also make sense.
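Such a combined header is nothing more than a list of imports. A sketch, using the hypothetical a.h/b.h/c.h from above:

```objc
// abAndC.h - aggregate header for things that are always used together.
// #import (unlike plain #include) skips files that were already included,
// so there is no danger of recursive or duplicate inclusion.
#import "a.h"
#import "b.h"
#import "c.h"
```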
As a side note, I always import my Constants.h in my .pch file to avoid importing it throughout my project.
Importing everything everywhere doesn't make sense to me. If that file isn't included in the .pch file, it can hurt compile time, and moreover you expose many class details everywhere, which should be considered bad practice.

A file with global imports is a good idea if it contains only very popular headers (I would put imports there for classes that occur in, say, 30% of the other classes or so). I would then include that global-import header in the .pch file, as that not only makes it visible everywhere in the app but also helps reduce compile time. Moreover, remember to use modules for the standard libraries wherever possible (that also reduces compile time, not only when including from the .pch file).

Apart from the popular classes, I wouldn't expose implementation details in a header file unless necessary; instead, use forward declarations.
I made a framework. The files I chose to make public were .h and .m files. I found that if I modify the contents of the .m file directly, it won't take effect. So what should I do to make it take effect?
Maybe I'm misunderstanding your question, but in the absence of other answers, let's see if I can help:
I made a framework
So you wrote some text into some files; then you used a tool, probably Xcode, to invoke the compiler, which interpreted that text as Objective-C and produced machine code in another file, and then constructed a framework bundle for you.
The files I chose to make public were .h and .m files. I found that if I modify the contents of the .m file directly, it won't take effect.
So now you edit your text file. What do you expect to happen? Do you expect the framework code to change? If so, aren't you missing a step compared to the above?
So what should I do to make it take effect?
Well, that depends on what your goal is here. If you want your users to be able to customise your framework in some way, then you need to design a method to do that using whatever tools you can when iOS is your target (Apple has rules).
This answer isn't much, but hope it helps.
I have a theory, but I don't know how to test it. We have a fairly large iOS project of about 200 Swift files and 240 Obj-C files (and an equal number of header files). We're still on Swift 1.2, which means that quite regularly the entire project gets rebuilt.
I've noticed that each .swift file takes about 4-6 seconds to compile; in other projects this is at most 2.
Now, I've noticed that in the build output, warnings generated in header files get repeated for every .swift file, which makes me believe the Swift compiler re-parses all headers included in the bridging header for each file. Since we have ~160 import statements in the bridging header, this kinda adds up.
So, basic questions:
Does the size of our bridging header impact build times?
Is there any way to optimize this, so it parses the headers only once?
Does Swift 2 have this same issue?
Any other tricks to optimize this? Besides rewriting everything in Swift, that's kinda too labor-intensive a project for us to undertake at this time.
Does the size of our bridging header impact build times?
Absolutely. The more files included in your bridging header, the more time it takes the compiler to parse them. This is the problem precompiled headers attempted to fix; PCH files have since been phased out in favor of modules.
Is there any way to optimize this, so it parses the headers only once?
To be honest, I don't know; it depends on your source files and dependencies.
Does Swift 2 have this same issue?
Yes, but compiler optimization is much better in the newer versions of Xcode and Swift. Again, stressing modules instead of precompiled header files here. I should note that it is possible to pass a .pch file directly to clang, but that's rarely a good idea.
If you can, I'd experiment with using a pch header in the hybrid project. I'd also consider creating precompiled libraries or static frameworks to prevent the constant rebuilding of classes. There's a great WWDC video from 2013 which introduces modules, I highly recommend watching it.
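As a rough illustration, a framework that adopts modules ships a module map alongside its headers; a hedged sketch (MyLib is a placeholder name, and the exact layout depends on your build settings):

```
framework module MyLib {
  umbrella header "MyLib.h"

  export *
  module * { export * }
}
```

With this in place, clients can write @import MyLib; and the compiler caches the parsed module instead of re-parsing the headers for every file.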
References:
Modules and Precompiled Headers
WWDC 2014 Session 404 - Advances in Objective-C
Why isn't ProjectName-Prefix.pch created automatically in Xcode 6?
I can only talk from the experience I have at my previous workplace, meaning some things might have changed. Also, I'm not sure if this helps your specific case, since you mix Objective C and Swift which I have never done, but the theory is still sound.
In short, yes, the size of the bridging header affects compile times and you're correct that it parses it once for every file/inclusion.
The correct way to optimise this seemed to be to split the project up into modules (also called "frameworks" at some point) because each module is compiled individually and thus not recompiled if nothing has changed.
Working on a large open-source project, we've hit this problem, so this makes a good case study / example:
Our library implements SVG spec
SVG Spec is defined as "including" the DOM and CSS Specs
DOM Spec requires a DOM implementation, but Apple refuses to share their DOM implementation on iOS
We had to re-implement DOM in ObjectiveC, so we could correctly implement SVG
But Apple has accidentally/deliberately put some classes in the global namespace that use reserved names from the DOM spec. It is impossible for anyone else to make a new class with those names.
Our current workaround:
We rename affected classes from e.g. "Name" to "AppleHasConflictedThisInGlobalNameSpaceName". Yes, it's not the politest of messages, but it explains to newcomers why we've had to deviate from the spec!
With iOS 8, Apple's done it again and added some more classes with this problem, including "Comment". (Apple? Really? Oh, come on, guys! Think before you spam the global namespace!) This is getting harder to work around.
Normal solution: Since C/ObjC has no namespaces (sob!), we'd prefix every class. SVG Spec has an official prefix - "SVG" - which we use. For non-spec classes, we have a longer prefix that's probably unique to our open-source project.
But for DOM, we are including our own DOM implementation, and it's possible that a developer's project might have a different, proprietary DOM implementation. Sensible prefixes are hard to come up with here. Apple has already reserved "DOM" as a prefix on Obj-C platforms.
If we took the prefix "SVGKitDOM", which would be the shortest correct prefix, that triples the length of the class names from DOM (!) and often makes the code unreadable. It's also against Apple's preference for 2-3 letter prefixes.
The project is open-source, so technically: anyone can global-rename the source to anything they want. But this is a huge pain for people to maintain.
I've been thinking of clever macro workarounds - e.g. #define OPTIONAL_PREFIX DOM / #define OPTIONAL_PREFIX SVGKitDOM / etc. - that would allow users to quickly rebuild the whole DOM and the dependent SVG libraries in one step with whatever prefix they need.
...but this still seems error-prone and messy. And it'll make new commits a nightmare: we'll have to educate every committer in how to use macros in every class name (if that even works with ObjC).
Argh!
There must be an easier way? Namespace conflicts have been a problem for 30+ years now :).
NOTE: This is Objective-C, so it's a superset of C, but the linking process is not a superset - For instance, Apple bans everyone from dynamic linking. So, we need a solution that's static :(.
Perhaps you could use the rarely mentioned @compatibility_alias directive, as follows:
File: PrefixedHeader.h
@interface my_longly_prefixed_ClassThatDoesSomething : NSObject
@end
File: ConvenienceHeader.h
@compatibility_alias ClassThatDoesSomething my_longly_prefixed_ClassThatDoesSomething;
If the user has their own proprietary DOM implementation, have them skip the convenience headers; otherwise, have them import them.
Would this work?
Beyond allowing one file to use another file's declarations, what actually happens behind the scenes? Does it just record where to find that file's contents for when they're later needed, or does it load the implementation's data into memory?
In short;
The header file defines the API for a module. It's a contract listing which methods a third party can call. The module can be considered a black box to third parties.
The implementation implements the module. It is the inside of the black box. As a developer of a module you have to write this, but as a user of a third party module you shouldn't need to know anything about the implementation. The header should contain all the information you need.
Some parts of a header file could be auto generated - the method declarations. This would require you to annotate the implementation as there are likely to be private methods in the implementation which don't form part of the API and don't belong in the header.
Header files sometimes have other information in them; type definitions, constant definitions etc. These belong in the header file, and not in the implementation.
The main reason for a header is to be able to #include it in some other file, so you can use the functions in one file from that other file. The header includes (only) enough to be able to use the functions, not the functions themselves, so (we hope) compiling it is considerably faster.
Maintaining the two separately mostly results from nobody ever having written an editor that automates the process very well. There's not really a lot of reason they couldn't, and a few have even tried, but the editors that have done so have never done very well in the market, and the more mainstream editors haven't adopted the feature.
Well, I will try:
Header files are only needed in the preprocessing phase. Once the preprocessor is done with them, the compiler never even sees them. Obviously, the target system doesn't need them for execution either (in the same way that .c files aren't needed).
Libraries, on the other hand, come into play during the linking phase (and, for dynamic linking, at load time). If a program is dynamically linked and the target environment doesn't have the necessary libraries, in the right places, with the right versions, it won't run.
In C nothing like that is needed, since once you compile it you get native code. The header files are simply copy-pasted wherever you #include them. This is very different from the bytecode you get from Java: there's no need for an interpreter (like the JVM); you just feed your binary to the CPU and it does its thing.
I have a question about parsing performance when using clang (libclang), and particularly the functions clang_parseTranslationUnit and clang_reparseTranslationUnit.
I'm trying to optimize the process, but I'm really out of ideas already.
The situation is the following: I have a .cpp source that includes a lot of header files. These headers change very seldom; however, the .cpp source changes a lot, and I need to reparse it often.
So, there is a possibility to "preparse/precompile" all the headers and create .pch file, and then use it when parsing .cpp.
However, the problem is that I can use only one .pch, so I need to create a single .pch from all the included headers.
Later, when I include some other header file, I need to regenerate the .pch from all the headers, even though they haven't changed at all.
Also, there is the problem that I need to know explicitly which headers are included in the .cpp (this is not very convenient, as it means I have to scan for the includes myself, then create the .pch, and then use it when parsing the .cpp source).
Is there any other option to optimize the process? I hoped that when I use clang_parseTranslationUnit and later clang_reparseTranslationUnit, the parsing would be optimized in this way automatically (at least the headers that haven't changed wouldn't need to be reparsed). But it doesn't work like that.