I have several targets, and depending on the target I need to import different headers. I can do this by defining some preprocessor value and then checking it with #ifdef, but I'm wondering if there is any way to do something like:
#if TARGET_NAME = "FirstTarget"
#import "SomeHeader.h"
#endif
You can use something like the following, for example:
// Your current targets are defined
#define FIRST_TARGET "FirstTarget"
#define SECOND_TARGET "SecondTarget"
#define THIRD_TARGET "ThirdTarget"
// Set which target you want to run
#define TARGET_NAME FIRST_TARGET // or SECOND_TARGET, THIRD_TARGET
// Use it like below
#if (TARGET_NAME == FIRST_TARGET) // or you can specify the name directly instead of defining it at the beginning
#import "SomeHeader.h"
//other headers
#elif (TARGET_NAME == SECOND_TARGET)
//header to be in second target
#elif (TARGET_NAME == THIRD_TARGET)
// other header
#endif
Edit:
I got it. It gives an error because the preprocessor can't compare variable-length (string) values; for more details, see here.
So in order to make it work, you can change it like below:
// Your current targets are defined; instead of strings, give some constant values
#define FIRST_TARGET 1 // @"FirstTarget"
#define SECOND_TARGET 2 // @"SecondTarget"
#define THIRD_TARGET 3 // @"ThirdTarget"
// Set which target you want to run
#define TARGET_NAME FIRST_TARGET // or SECOND_TARGET, THIRD_TARGET
// Use it like below
#if TARGET_NAME == FIRST_TARGET
#import "SomeHeader.h"
//other headers
#elif (TARGET_NAME == SECOND_TARGET)
//header to be in second target
#elif (TARGET_NAME == THIRD_TARGET)
// other header
#endif
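For what it's worth, the approach the question already mentions (defining a per-target preprocessor value and checking it with #ifdef) avoids the comparison problem entirely. A minimal sketch, with placeholder macro and header names:
// In Xcode, add FIRST_TARGET=1 under "Preprocessor Macros"
// (GCC_PREPROCESSOR_DEFINITIONS) in the first target's build settings,
// SECOND_TARGET=1 in the second target's, and so on. Then:
#ifdef FIRST_TARGET
#import "SomeHeader.h"
#elif defined(SECOND_TARGET)
#import "SecondHeader.h" // placeholder header name
#endif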
I've found the UIKIT_DEFINE_AS_PROPERTIES macro in a lot of places in the system headers, like this:
#if UIKIT_DEFINE_AS_PROPERTIES
@property(class, nonatomic, readonly) UIDevice *currentDevice;
#else
+ (UIDevice *)currentDevice;
#endif
But I cannot find its value.
What is UIKIT_DEFINE_AS_PROPERTIES used for, and where is it defined?
Is it for Swift?
UIKIT_DEFINE_AS_PROPERTIES is defined in <UIKit/UIKitDefines.h> as
#if (!defined(SWIFT_CLASS_EXTRA) || (defined(SWIFT_SDK_OVERLAY_UIKIT_EPOCH) && SWIFT_SDK_OVERLAY_UIKIT_EPOCH >= 1))
#define UIKIT_DEFINE_AS_PROPERTIES 1
#else
#define UIKIT_DEFINE_AS_PROPERTIES 0
#endif
The SWIFT_CLASS_EXTRA and SWIFT_SDK_OVERLAY_UIKIT_EPOCH macros
control how Objective-C code is imported into Swift; see ClangImporter.cpp.
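In practice the two branches surface the same accessor: an Objective-C class property is backed by a class method, so from Objective-C either spelling compiles whichever branch the header took. For example:
#import <UIKit/UIKit.h>
UIDevice *a = UIDevice.currentDevice;    // property (dot) syntax
UIDevice *b = [UIDevice currentDevice];  // message-send syntax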
When I use the function class_getName(), I find it is declared like this:
OBJC_EXPORT const char *class_getName(Class cls)
What does OBJC_EXPORT mean?
If you Cmd-click on the symbol, you'll see it's not a keyword. OBJC_EXPORT is defined as:
#define OBJC_EXPORT OBJC_EXTERN OBJC_VISIBLE
And OBJC_EXTERN is either extern "C" or extern, depending on whether you're compiling as C++ or C, respectively.
And OBJC_VISIBLE is either __declspec(dllexport) or __declspec(dllimport) on Windows, or __attribute__((visibility("default"))) otherwise.
Basically it's saying you can link that symbol externally.
Here's the full listing:
#if !defined(OBJC_EXTERN)
# if defined(__cplusplus)
# define OBJC_EXTERN extern "C"
# else
# define OBJC_EXTERN extern
# endif
#endif
#if !defined(OBJC_VISIBLE)
# if TARGET_OS_WIN32
# if defined(BUILDING_OBJC)
# define OBJC_VISIBLE __declspec(dllexport)
# else
# define OBJC_VISIBLE __declspec(dllimport)
# endif
# else
# define OBJC_VISIBLE __attribute__((visibility("default")))
# endif
#endif
#if !defined(OBJC_EXPORT)
# define OBJC_EXPORT OBJC_EXTERN OBJC_VISIBLE
#endif
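As a usage sketch (the header and function names here are hypothetical), a library that wants a symbol visible to its clients would declare it like this:
// MyLib.h (hypothetical)
#include <objc/objc-api.h>
OBJC_EXPORT void MyLibDoSomething(void);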
If you have three classes named A, B, and C, and all three classes are unrelated (i.e., not in an inheritance hierarchy) and you just want to access a value declared in A, then extern is a more appropriate way to go. In this case, you want to declare the variable as extern in ClassA.h, then define it in ClassA.m. As long as ClassB and ClassC import ClassA.h, they will be able to link against the same extern definition.
Instead of using extern by itself, it's more robust to use OBJC_EXPORT, which is defined in objc-api.h and handles compiling under C++ as well. Here's a code sample:
// ClassA.h
OBJC_EXPORT NSString* commonString;
...
// ClassA.m
NSString* commonString = @"OldValue";
// ClassB.m
#import "ClassA.h"
...
commonString = @"NewValue"; // Can be inside a function or method
Reference: this SO post.
I've started using CocoaLumberjack and was interested in using their custom log levels feature. They have a handy config available at https://github.com/CocoaLumberjack/CocoaLumberjack/wiki/CustomLogLevels to get you started. I don't know exactly what I want right now, so I left it as is.
Next I set my debug level in my code using this...
static const int ddLogLevel = LOG_LEVEL_DEBUG;
However, with this setup it appears only messages of greater severity than LOG_LEVEL_DEBUG get presented, meaning calls to DDLogInfo() and above show up, but not DDLogDebug(). This is also true as I slide upwards. So...
static const int ddLogLevel = LOG_LEVEL_INFO;
...ignores DDLogInfo() and DDLogDebug(), but shows DDLogNotice() and higher. Surely the expected behavior is to be inclusive of the specified level.
Here's the code deep inside CocoaLumberjack that makes the decision...
for (DDLoggerNode *loggerNode in loggers) {
    // Skip the loggers that shouldn't write this message based on the logLevel
    if (!(logMessage->logFlag & loggerNode.logLevel)) {
        continue;
    }
    dispatch_group_async(loggingGroup, loggerNode->loggerQueue, ^{ @autoreleasepool {
        [loggerNode->logger logMessage:logMessage];
    }});
}
In my first example, my log level is 111111 (63) and the message flag (using DDLogDebug()) is 100000 (32). 63 & 32 is nonzero, so with the NOT the condition fails and the logger shouldn't skip; I would therefore expect the message to get logged, but it isn't. Moving the log call up to DDLogInfo(), whose message flag is 010000 (16), works out the same way, yet in that case I do see the message. Anyone have any experience with this?
I think I've worked it out. In CocoaLumberjack itself, in DDLog.h, the log level flags are defined like so:
#define LOG_FLAG_ERROR (1 << 0) // 0...00001
#define LOG_FLAG_WARN (1 << 1) // 0...00010
#define LOG_FLAG_INFO (1 << 2) // 0...00100
#define LOG_FLAG_DEBUG (1 << 3) // 0...01000
#define LOG_FLAG_VERBOSE (1 << 4) // 0...10000
However, the CustomLogLevels MyLog.h file defines them like so:
#define LOG_FLAG_FATAL (1 << 0) // 0...000001
#define LOG_FLAG_ERROR (1 << 1) // 0...000010
#define LOG_FLAG_WARN (1 << 2) // 0...000100
#define LOG_FLAG_NOTICE (1 << 3) // 0...001000
#define LOG_FLAG_INFO (1 << 4) // 0...010000
#define LOG_FLAG_DEBUG (1 << 5) // 0...100000
Note that it adds an extra FATAL flag, which has the effect of shifting all of the other flags up one bit position. This is the cause of the issue you were seeing.
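Numerically, using the values from the two headers quoted above (and assuming the logger was registered with the core LOG_LEVEL_VERBOSE mask, as a plain addLogger: call does):
// Core DDLog.h:  LOG_FLAG_DEBUG    = 1 << 3 = 0...001000  (8)
// MyLog.h:       LOG_FLAG_DEBUG    = 1 << 5 = 0...100000 (32)
// Core mask:     LOG_LEVEL_VERBOSE = bits 0-4            (31)
// A DDLogDebug() message flagged 32 then fails the check:
// (32 & 31) == 0, so the logger node skips it.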
If you have custom levels higher than the built-in ones, try initialising with:
[DDLog addLogger: [DDASLLogger sharedInstance] withLogLevel: LOG_LEVEL_CUSTOM];
[DDLog addLogger: [DDTTYLogger sharedInstance] withLogLevel: LOG_LEVEL_CUSTOM];
where LOG_LEVEL_CUSTOM is defined in your MyLog.h as:
#define LOG_LEVEL_CUSTOM (LOG_FLAG_ERROR | LOG_FLAG_WARN | LOG_FLAG_INFO | LOG_FLAG_CUSTOM)
First I define some values:
#import "ViewController.h"
#define SLIDE_TOP 100;
#define SLIDE_BOTTOM -100;
if(distance > SLIDE_TOP){
NSLog(@"TOP");
}
I get these errors:
1. ViewController.m:98:19: Expected ')'
2. ViewController.m:98:19: If statement has empty body
When you #define something, the preprocessor simply substitutes replacement tokens (everything after the identifier) for the identifier in the source code. So, your if statement looks like this after the preprocessor runs:
if (distance > 100;) {
NSLog(@"TOP");
}
Note that the semicolon after "100" is included. The compiler doesn't expect the statement to end there, because there's an unmatched open parenthesis, so it complains that you're missing a ')'. The fix is to remove the semicolon from the end of the #define statement:
#define SLIDE_TOP 100
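With the trailing semicolon removed, the same statement now expands as intended:
if (distance > SLIDE_TOP) { // expands to: if (distance > 100) {
    NSLog(@"TOP");
}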
Change these
#define SLIDE_TOP 100;
#define SLIDE_BOTTOM -100;
to
#define SLIDE_TOP 100
#define SLIDE_BOTTOM -100
The semicolon is not required in a #define.
I have a large enum (for the sake of transparency, 63 values), and I am now creating an NS_OPTIONS bit-flag type based on that enum. Is there a way that I can write this so that it will be flexible?
The main concerns I have with hardcoding it are:
If I add/remove an enum, I will have to manually add/remove it in my bitflag.
There is a lot of typing to generate these.
My .h file is getting intensely long (because I like to use whitespace and adequate comments)
The only solution I've come up with thus far is:
#define FlagForEnum(enum) 1 << enum
typedef NS_ENUM(NSInteger, ExampleEnum)
{
Value1,
Value2,
...
ValueN
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
Value1Flag = FlagForEnum(Value1),
Value2Flag = FlagForEnum(Value2),
...
ValueNFlag = FlagForEnum(ValueN)
};
This is a barely adequate solution: when I remove an enum value I at least get a compile error, and if the enum ordering changes, the flags' bit-shifted positions change too (not that it truly matters, but it seems comforting). But it doesn't solve the 'this-is-a-lot-of-typing' problem, or the 'what-if-I-forget-to-add-a-flag' problem.
You can use a technique called X macros:
#define VALUES \
VALUE_LINE(Value1) \
VALUE_LINE(Value2) \
VALUE_LINE(Value3)
typedef NS_ENUM(NSUInteger, ExampleEnum)
{
#define VALUE_LINE(x) x,
VALUES
#undef VALUE_LINE
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
#define VALUE_LINE(x) x##Flag = 1 << x,
VALUES
#undef VALUE_LINE
};
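After preprocessing, the two typedefs expand to exactly the hand-written form, so adding or removing a value means touching only the VALUES list, and the enum and the flags can never drift apart:
typedef NS_ENUM(NSUInteger, ExampleEnum)
{
    Value1,
    Value2,
    Value3,
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
    Value1Flag = 1 << Value1,
    Value2Flag = 1 << Value2,
    Value3Flag = 1 << Value3,
};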
Here is a slightly better (in terms of less typing) preprocessor #define solution, although it still isn't as elegant as I'd like.
#define BitShift(ENUM_ATTRIBUTE) (1 << ENUM_ATTRIBUTE)
#define CreateEnumFlag(ENUM_ATTRIBUTE) ENUM_ATTRIBUTE##Flag = BitShift(ENUM_ATTRIBUTE)
typedef NS_ENUM(NSUInteger, ExampleEnum)
{
Value1,
Value2,
...
ValueN
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
CreateEnumFlag(Value1),
CreateEnumFlag(Value2),
...
CreateEnumFlag(ValueN)
};
This will create flags of the form Value1Flag, Value2Flag, ..., ValueNFlag.
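The generated flags then combine and test like any other NS_OPTIONS bitmask, for example:
ExampleEnumFlags flags = Value1Flag | Value2Flag;
if (flags & Value1Flag) {
    // Value1 is present in the set
}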