Bitmasking in Objective-C - iOS

I'd like to learn bit masking. As far as I understand, it is a means to store multiple binary values of a certain type in one variable.
If the above assumption is true, I figured I could do something like this:
typedef NSUInteger Traits;
enum
{
TraitsCharacterHonest = 0,
TraitsCharacterOptimistic = 1,
TraitsCharacterPolite = 4,
TraitsCharacterDevious = 8,
TraitsPhysicalTall = 16,
TraitsPhysicalBeautiful = 32,
TraitsPhysicalFat = 64,
TraitsPhysicalBigEyes = 128,
TraitsPhysicalRedHair = 256,
};
#import <Foundation/Foundation.h>
@interface Person : NSObject
@property (strong, nonatomic) NSString *name;
@property (assign, nonatomic) Traits *traits;
@end
Question 1 is: how do I assign more traits to one person?
Question 2 is: do I have to assign ever-increasing numbers to the enum items myself, or is there a way to indicate this automatically?
Ultimately I want to achieve something like this:
Person *john = [[Person alloc] init];
//here code that assigns john three traits: TraitsCharacterHonest,
//TraitsCharacterOptimistic and TraitsPhysicalBeautiful.
If I understand it correctly, the value of
john.traits should be 100011 in binary, reading from the right, with each place representing a particular enum value / trait: 0 meaning not having it and 1 meaning having it.
Can you please advise on the syntax and explain any particular aspects as needed?

I'd recommend changing a few things:
The enum values can be changed to 1 left-shifted by increasing amounts. It makes them a little easier to write, in my opinion.
You don't need to typedef to NSUInteger; you can declare an enum type directly using typedef enum.
And, as other people have mentioned, your property shouldn't be a pointer to a Traits type.
My code would look like this:
typedef enum
{
TraitsCharacterHonest = 1 << 0,
TraitsCharacterOptimistic = 1 << 1,
TraitsCharacterPolite = 1 << 2,
TraitsCharacterDevious = 1 << 3,
TraitsPhysicalTall = 1 << 4,
TraitsPhysicalBeautiful = 1 << 5,
TraitsPhysicalFat = 1 << 6,
TraitsPhysicalBigEyes = 1 << 7,
TraitsPhysicalRedHair = 1 << 8
} Traits;
#import <Foundation/Foundation.h>
@interface Person : NSObject
@property (strong, nonatomic) NSString *name;
@property (assign, nonatomic) Traits traits;
@end
Setting John's traits will look like this:
Person *john = [[Person alloc] init];
john.traits = TraitsCharacterHonest | TraitsCharacterOptimistic | TraitsPhysicalBeautiful;
However, while bit masks are useful to learn, they're a real pain to debug. If you want to print
this character's traits now, you'll have to write code like this:
NSMutableString *result = [NSMutableString string];
if (self.traits & TraitsCharacterHonest)
{
    [result appendString: @"Honest, "];
}
if (self.traits & TraitsCharacterOptimistic)
{
    [result appendString: @"Optimistic, "];
}
if (self.traits & TraitsCharacterPolite)
{
    [result appendString: @"Polite, "];
}
// etc...
Additionally, the syntax for operations like removing a trait is confusing. You'll have to use & with a NOT-ed constant:
// remove 'Tall' trait
john.traits = john.traits & ~TraitsPhysicalTall;
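If the raw operators feel error-prone, you could wrap them in small helper methods on Person (a sketch of my own; these helper names are not part of the original answer):
// Convenience wrappers around the bitwise operators (hypothetical helpers on Person).
- (BOOL)hasTrait:(Traits)trait
{
    return (self.traits & trait) != 0;
}

- (void)addTrait:(Traits)trait
{
    self.traits |= trait;
}

- (void)removeTrait:(Traits)trait
{
    self.traits &= ~trait;
}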
If you can (and performance isn't too much of an issue), I'd prefer using a higher-level feature. Perhaps an NSSet with string constants? e.g.
__unused static NSString *TraitsCharacterHonest = @"TraitsCharacterHonest";
__unused static NSString *TraitsCharacterOptimistic = @"TraitsCharacterOptimistic";
__unused static NSString *TraitsCharacterPolite = @"TraitsCharacterPolite";
// etc...
@interface Person : NSObject
@property (strong, nonatomic) NSString *name;
@property (strong, nonatomic) NSMutableSet *traits;
@end
Then you can do:
// adding
[john.traits addObject: TraitsCharacterHonest];
// checking
[john.traits containsObject: TraitsCharacterHonest];
// removing
[john.traits removeObject: TraitsCharacterHonest];
Makes more sense to me. What's more, you can print the description of the traits directly with
NSLog(@"John's traits: %@", john.traits);
and you'll get reasonable output.
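One caveat (my addition, not part of the original answer): the traits set has to exist before you add to it, since messages to nil are silently ignored. A minimal sketch, assuming you create the set in Person's initializer:
- (instancetype)init
{
    self = [super init];
    if (self) {
        // Create the set up front so addObject:/removeObject: have something to work on.
        _traits = [NSMutableSet set];
    }
    return self;
}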

One issue you can run into is that using bit masks to indicate membership in a set is capped by the number of bits in the underlying data type. For instance, a 32-bit unsigned long has room for only 32 disjoint or different members. If you need to add a 33rd, you are out of luck unless you go to a 64-bit unsigned integer.
One workaround for this is to use an array of bytes. With this approach you have to specify your bit membership as two pieces of data, the offset to the byte and the bit mask to use for the specific bit.
I have also seen people use byte arrays where a single member gets an entire byte rather than a single bit. It can be a waste of memory, but it may be more flexible and useful, and the amount of memory wasted may not be a problem.
To use an array of bytes to hold the set of bits, you might represent each member of the set with an unsigned long in which the least significant byte is the bit mask and the remaining bytes are an unsigned 3-byte offset into the byte array. You would then do something like the following:
int getBitSet (unsigned char *bArray, unsigned long ulItem)
{
    unsigned long ulByteOffset = ((ulItem >> 8) & 0x00ffffff);
    unsigned char ucByteMask = (ulItem & 0x000000ff);
    return (*(bArray + ulByteOffset) & ucByteMask);
}
int setBitSet (unsigned char *bArray, unsigned long ulItem, unsigned long ulNewValue)
{
    unsigned char oldValue;
    unsigned long ulByteOffset = ((ulItem >> 8) & 0x00ffffff);
    unsigned char ucByteMask = (ulItem & 0x000000ff);
    oldValue = *(bArray + ulByteOffset) & ucByteMask;
    if (ulNewValue) {
        *(bArray + ulByteOffset) |= ucByteMask; // set bit
    } else {
        *(bArray + ulByteOffset) &= ~ucByteMask; // clear bit
    }
    return oldValue;
}
You could then have a set of functions to get and set the bytes or you could use macros. With C++ you can create your own class for this functionality and provide various types of logical operations as well so that you can create sets of various kinds and then perform logical operations on the sets.
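As a rough usage sketch (my own illustration of the scheme above; the member constants and array size are hypothetical), each member is encoded as (byte offset << 8) | bit mask:
// Hypothetical member constants: (byte offset << 8) | bit mask.
#define MEMBER_HONEST   ((0UL << 8) | 0x01)   // byte 0, bit 0
#define MEMBER_RED_HAIR ((2UL << 8) | 0x10)   // byte 2, bit 4

void bitSetExample(void)
{
    unsigned char traits[16] = {0};          // room for 16 * 8 = 128 members

    setBitSet(traits, MEMBER_HONEST, 1);     // set the 'honest' bit
    setBitSet(traits, MEMBER_RED_HAIR, 1);   // set the 'red hair' bit

    if (getBitSet(traits, MEMBER_HONEST)) {
        // the 'honest' bit is set
    }

    setBitSet(traits, MEMBER_RED_HAIR, 0);   // clear the 'red hair' bit
}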

On iOS 6 and above, or Mac OS X 10.8 and above, you can do:
typedef NS_OPTIONS(NSUInteger, Traits) {
    TraitsCharacterHonest = 1 << 0,
    TraitsCharacterOptimistic = 1 << 1,
    TraitsCharacterPolite = 1 << 2,
    TraitsCharacterDevious = 1 << 3,
    TraitsPhysicalTall = 1 << 4,
    TraitsPhysicalBeautiful = 1 << 5,
    TraitsPhysicalFat = 1 << 6,
    TraitsPhysicalBigEyes = 1 << 7,
    TraitsPhysicalRedHair = 1 << 8
};
For more info, refer to http://nshipster.com/ns_enum-ns_options/
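Usage is then the same as with the plain typedef enum above (a short sketch reusing the Person class and the john example from the earlier answers):
Person *john = [[Person alloc] init];
john.traits = TraitsCharacterHonest | TraitsCharacterOptimistic | TraitsPhysicalBeautiful;

// Test a flag with a bitwise AND, as before.
if (john.traits & TraitsPhysicalBeautiful) {
    // the flag is set
}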

Your major issue here is making traits a pointer. Drop the pointer, and do it like you would in C:
john.traits |= TraitsCharacterHonest | TraitsCharacterOptimistic | TraitsPhysicalBeautiful;
Remember that you only need pointers in a couple of situations in Objective-C:
When you are dealing with actual objects (derived from NSObject)
When you need to pass a primitive by reference (for example, an int * argument to a function that returns a count), in which case you take the address of a local variable, and that pointer is not saved by the function (see the sketch below).
When you need an array of primitive types, dynamically allocated on the heap (e.g. using malloc & friends).
Otherwise, just use a stack-allocated primitive type, as you can do a lot of things with it.
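For the second case, here is a minimal sketch (my own illustration; CountVowels is a made-up function, not part of the question):
#import <Foundation/Foundation.h>

// A function that returns a count through an int * out-parameter (hypothetical example).
static void CountVowels(NSString *string, int *outCount)
{
    int count = 0;
    NSCharacterSet *vowels = [NSCharacterSet characterSetWithCharactersInString:@"aeiouAEIOU"];
    for (NSUInteger i = 0; i < string.length; i++) {
        if ([vowels characterIsMember:[string characterAtIndex:i]]) {
            count++;
        }
    }
    if (outCount) {
        *outCount = count;
    }
}

static void Demo(void)
{
    // Take the address of a stack-allocated int; the pointer is not kept by the function.
    int vowelCount = 0;
    CountVowels(@"John Appleseed", &vowelCount);
    NSLog(@"%d vowels", vowelCount);
}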

First of all change:
...
typedef NSUInteger Traits;
enum
{
TraitsCharacterHonest = 0, // can't be 0
...
};
...
@property (assign, nonatomic) Traits *traits; // no need to create a pointer to a primitive type
to:
...
typedef NSUInteger Traits;
enum
{
TraitsCharacterHonest = 1,
...
};
...
@property (assign, nonatomic) Traits traits;
For assigning, you should do the following:
john.traits |= TraitsCharacterHonest | TraitsCharacterDevious;
Bitwise operations in Objective-C are the same as in the C language.
Check out this tutorial: Bitwise Operators in C and C++: A Tutorial

Assuming that 1 << 8 is the same as 100000000 in binary:
john.traits = TraitsCharacterHonest | TraitsCharacterOptimistic | TraitsPhysicalBeautiful;
is exactly the same as:
john.traits = 000000001 | 000000010 | 000100000;
and the result is:
john.traits = 000100011
Now, when you want to check conditional:
if (self.traits & TraitsCharacterHonest) { ... }
it is equivalent to:
if (000100011 & 000000001) { ... }
and result of that is:
if (000000001) { ... }
And this is actually 1; any non-zero value is true, so the whole conditional is true. Enjoy :-)

Related

Why does this hardcoded string not get deallocated, while the alloc/inited one does? [duplicate]

I expected the output of the second example below to be "null", but after running the code I found that in Example 1 the output is "Strong-String" and in Example 2 it's "null".
I really don't understand why this is.
Examples:
// property definition
@property (nonatomic, strong) NSString *strongStr;
@property (nonatomic, weak) NSString *weakString;
// Sample 1
self.strongStr = @"Strong-String";
self.weakString = self.strongStr;
self.strongStr = nil;
// output -> Strong-String
NSLog(@"weakString = %@", self.weakString);
// Sample 2
self.strongStr = [[NSString alloc] initWithUTF8String:"Strong-String"];
self.weakString = self.strongStr;
self.strongStr = nil;
// output -> null
NSLog(@"weakString = %@", self.weakString);
When you define a string like
// Sample 1
self.strongStr = @"Strong-String";
the compiler actually keeps a static reference to that string. If you were to create another string with the same characters, you would notice that the memory addresses of both strings are the same. The compiler does this to save memory when the same string is used over and over, so it doesn't have to reallocate the memory every time. This is why the literal string does not get deallocated, while the manually alloc/init'ed one does.
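You can see this for yourself by printing the pointers (a quick sketch of my own; the actual addresses will differ on your machine):
NSString *a = @"Strong-String";
NSString *b = @"Strong-String";
NSString *c = [[NSString alloc] initWithUTF8String:"Strong-String"];

// The two literals point to the same constant string object; the alloc/init'ed
// one is (typically) a separate heap object.
NSLog(@"a = %p, b = %p, c = %p", a, b, c);
NSLog(@"a == b? %d, a == c? %d", a == b, a == c);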

Is there a way to make NSInvocation support variadic-parameter methods like [NSString stringWithFormat:...]?

The Apple docs say: "NSInvocation does not support invocations of methods with either variable numbers of arguments or union arguments."
I searched for hours; some people say va_list works, but I tried it and it does not.
But I think there may be a way to do the same thing (reflection) with a variadic method such as [NSString stringWithFormat:],
so I found a way; please read the code below.
@interface INTObj : NSObject
@property (nonatomic, assign) int realvalue;
@end
@interface FloatObj : NSObject
@property (nonatomic, assign) float realvalue;
@end
// here, the selectorName is only known at runtime
NSString *selectorName = @"stringWithFormat:";
SEL selector = NSSelectorFromString(selectorName);
typedef id (*Obj_Imp)(id, SEL, ...);
Method md = class_getClassMethod(obj, selector); // here, obj means the NSString class
Obj_Imp fun = (Obj_Imp)method_getImplementation(md); // stringWithFormat:...
NSString *arg1 = @"hello %f %d";
FloatObj *fo = [[FloatObj alloc] init];
fo.realvalue = 11.3;
INTObj *io = [[INTObj alloc] init];
io.realvalue = 5;
NSObject *arr[3] = {arg1, fo, io};
// it crashes: EXC_BAD_ACCESS (code=1)
NSString *result = fun(obj, selector, arr[0], [arr[1] realvalue], [arr[2] realvalue]);
// it works, but I can't do this because I don't know the type of the 4th parameter at runtime
NSString *result = fun(obj, selector, arr[0], [(FloatObj *)arr[1] realvalue], [arr[2] realvalue]);
Why does the second call of the function "fun" work while the first one crashes?
Is there a better way to do this?
This has nothing to do with NSInvocation or calling the method implementation directly. You should get the same undefined behavior if you called the method directly:
NSString *result = [NSString stringWithFormat:arr[0], [arr[1] realvalue], [arr[2] realvalue]];
or even a regular function:
NSLog(arr[0], [arr[1] realvalue], [arr[2] realvalue]);
In C (and Objective-C) every expression must have a compile-time type. The compiler needs to know this compile-time type to be able to compile the code correctly.
So let me ask you: what should be the compile-time type of [arr[1] realvalue]? Is it int? Is it float? The compiler will do different things depending on what type it is. If it is float, for example, the C standard says that a float passed to varargs will be promoted to double. The calling conventions for passing an int and a double in varargs are different (in fact, these two types have different sizes). And +[NSString stringWithFormat:] expects the compile-time type of the thing you pass to match the format specifier you give it in your format string, or there will be undefined behavior.
From your format string, it seems like you wanted the [arr[1] realvalue] argument to be float or double since you used %f. Since FloatObj is the class whose realvalue method returns float, it seems that casting arr[1] to FloatObj * is the right thing to do.
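If the concrete class really is only known at runtime, one possible workaround (a sketch under that assumption, reusing arr, fun, obj and selector from the question; it is not a general solution) is to branch on the class so that each call site has fixed compile-time argument types:
id second = arr[1];
id third = arr[2];
NSString *result = nil;

// Branch so the compiler knows the exact type of each vararg at each call site.
if ([second isKindOfClass:[FloatObj class]] && [third isKindOfClass:[INTObj class]]) {
    result = fun(obj, selector, arr[0],
                 (double)[(FloatObj *)second realvalue],   // %f expects a double in varargs
                 [(INTObj *)third realvalue]);             // %d expects an int
}
NSLog(@"%@", result);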

BOOL property from a calculation returns NSNumber with incorrect value using valueForKey:

I have a simple object which has one NSNumber that is used to store some flags.
I have a convenience getter method which in fact does:
[self.flags integerValue] & SomeConstantFlag
for a property @property (readonly, nonatomic, assign) BOOL someConstantFlag,
and this works fine when accessing the underlying bool value like
model.someConstantFlag
but when I try to
id value = [model valueForKey:@"someConstantFlag"];
Then it returns a bad boolean representation e.g. NSNumber with value 2, 4 etc.
Why is this happening when the declaration of the property is BOOL? Is there a "Pretty" way to overcome this issue?
Wrapping, on the other hand, works OK:
BOOL someBool = 42;
NSNumber* numberVal = @(someBool);
// The underlying value is an __NSCFBoolean with the proper 0/1 value!
valueForKey always returns an Objective-C object, even if the property has scalar type.
From the documentation (emphasis mine):
The default implementations of valueForKey: and setValue:forKey:
provide support for automatic object wrapping of the non-object data
types, both scalars and structs.
Once valueForKey: has determined the specific accessor method or
instance variable that is used to supply the value for the specified
key, it examines the return type or the data type. If the value to be
returned is not an object, an NSNumber or NSValue object is created
for that value and returned in its place.
The return value of your method is BOOL, which is defined as
typedef signed char BOOL;
on OS X and on the 32-bit iOS platform. So what valueForKey returns is an NSNumber
containing the result of
signed char val = [self.flags integerValue] & SomeConstantFlag;
and that can be in the range -128 .. 127.
To ensure that you get only YES or NO (aka 1 or 0) write your custom getter as:
-(BOOL)someConstantFlag
{
return ([self.flags integerValue] & SomeConstantFlag) != 0;
}
Remark: On the 64-bit iOS platform (but not on 64-bit OS X), BOOL is defined as the C99 _Bool, which is a "proper" boolean type and can take only the value 0 or 1.
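To see the effect, here is a small sketch with made-up flag values (not the asker's actual constants):
NSInteger flags = 6;                  // binary 110 (made-up value)
NSInteger someConstantFlag = 1 << 1;  // binary 010 (made-up value)

// Where BOOL is signed char (32-bit), this stores 2; where BOOL is _Bool (64-bit iOS), it collapses to 1.
BOOL bad = flags & someConstantFlag;

// This always stores exactly 1 or 0.
BOOL good = (flags & someConstantFlag) != 0;

NSLog(@"bad = %d, good = %d", bad, good);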
NSNumber *value = [model valueForKey:@"someConstantFlag"];
BOOL boolVal = [value boolValue];
I think you should consider the following problems. Firstly, integerValue returns NSInteger, which means that if you support the 64-bit architecture it will return a 64-bit integer, not a 32-bit one. What is more, in your code here
[self.flags integerValue] & SomeConstantFlag
the & does something you might not expect: if flags is 00010 and SomeConstantFlag is 00001, the result is 00000, which equals 0; if they are 00011 and 00110, the result is 00010, which equals 2. That is why, when you call valueForKey, you get 2, 4 or something else depending on your flags :)
What is more, in Objective-C everything different from 0 is YES.
Try reconsidering your bit logic :). See the following example:
enum
{
kWhite = 0,
kBlue = 1 << 0,
kRed = 1 << 1,
kYellow = 1 << 2,
kBrown = 1 << 3,
};
typedef char ColorType;
and in your setter check the following
ColorType pinkColor = kWhite | kRed;
if (pinkColor & (kWhite | kBlue | kRed | kYellow)) {
// any of the flags has been set
}
The check is true because at least one of the flags kWhite, kBlue, kRed or kYellow (here, kRed) has been set.
However, kBrown has not been set.

Create enum values default to NSInteger

I understand that when we declare an enum like the one below, the values default to type "int":
enum{
category0,
category1
};
However, I am getting issues now that iOS supports 64-bit. The solution I am considering, to avoid changing a lot of my code, is to make these enum values default to "NSInteger" instead of "int". As I understand it, the system will decide whether NSInteger is int or long depending on whether it's running on 32-bit or 64-bit.
I am having difficulty understanding enums, so I would appreciate your help on this.
Thanks
If I use a typedef as suggested by the comments:
typedef NS_ENUM(NSInteger, Category){
category0,
category1
};
how should I use it? Normally when I compare it with, say, a table view's indexpath.section, I do this:
if(indexpath.section == category0){
...
}
If I declare that "Category" type, do I need to use it? Sorry, I don't quite understand typedef.
Try
typedef enum{
category0,
category1
} Category;
or
typedef NS_ENUM(NSInteger, Category) {
category0,
category1
};
You also can explicitly set integer numbers to your enums values
typedef NS_ENUM(NSInteger, Category) {
category0 = 0,
category1 = 1,
category42 = 42
};
Then you can use them the same as int
if(indexpath.section == category0){
...
}
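To answer the "do I need to use it?" part: the typedef simply gives you a named type that you can use for variables, properties and parameters, for example (a sketch of my own):
// A variable (or property/parameter) declared with the typedef'd type:
Category section = (Category)indexpath.section;

switch (section) {
    case category0:
        // first section
        break;
    case category1:
        // second section
        break;
}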

What is the difference between NS_ENUM and NS_OPTIONS?

I preprocessed the following code with clang in Xcode 5.
typedef NS_ENUM(NSInteger, MyStyle) {
MyStyleDefault,
MyStyleCustom
};
typedef NS_OPTIONS(NSInteger, MyOption) {
MyOption1 = 1 << 0,
MyOption2 = 1 << 1,
};
And got this.
typedef enum MyStyle : NSInteger MyStyle; enum MyStyle : NSInteger {
MyStyleDefault,
MyStyleCustom
};
typedef enum MyOption : NSInteger MyOption; enum MyOption : NSInteger {
MyOption1 = 1 << 0,
MyOption2 = 1 << 1,
};
I know NS_OPTIONS is for a bitmask, but are there any technical differences?
Or is this just a naming convention?
EDIT
According to the definition of NS_OPTIONS, it's probably for compiler compatibility (especially for the C++ compiler).
// In CFAvailability.h
// Enums and Options
#if (__cplusplus && __cplusplus >= 201103L && (__has_extension(cxx_strong_enums) || __has_feature(objc_fixed_enum))) || (!__cplusplus && __has_feature(objc_fixed_enum))
#define CF_ENUM(_type, _name) enum _name : _type _name; enum _name : _type
#if (__cplusplus)
#define CF_OPTIONS(_type, _name) _type _name; enum : _type
#else
#define CF_OPTIONS(_type, _name) enum _name : _type _name; enum _name : _type
#endif
#else
#define CF_ENUM(_type, _name) _type _name; enum
#define CF_OPTIONS(_type, _name) _type _name; enum
#endif
The __cplusplus value in clang is 199711, and I can't test what this is exactly for, though.
There's a basic difference between an enum and a bitmask (option). You use an enum to list exclusive states. A bitmask is used when several properties can apply at the same time.
In both cases you use integers, but you look at them differently. With an enum you look at the numerical value, with bitmasks you look at the individual bits.
typedef NS_ENUM(NSInteger, MyStyle) {
MyStyleDefault,
MyStyleCustom
};
Will only represent two states. You can simply check it by testing for equality.
switch (style){
case MyStyleDefault:
// int is 0
break;
case MyStyleCustom:
// int is 1
break;
}
The bitmask, on the other hand, can represent more states. You check the individual bits with logical and bitwise operators.
typedef NS_OPTIONS(NSInteger, MyOption) {
MyOption1 = 1 << 0, // bits: 0001
MyOption2 = 1 << 1, // bits: 0010
};
if (option & MyOption1){ // last bit is 1
// bits are 0001 or 0011
}
if (option & MyOption2){ // second to last bit is 1
// bits are 0010 or 0011
}
if ((option & MyOption1) && (option & MyOption2)){ // last two bits are 1
// bits are 0011
}
tl;dr An enum gives names to numbers. A bitmask gives names to bits.
The only major difference is that using the appropriate macro allows Code Sense (Xcode's code completion) to do type checking and code completion better. For example, NS_OPTIONS allows the compiler to make sure all the enums you | together are of the same type.
For further reading see: http://nshipster.com/ns_enum-ns_options/
Edit:
Now that Swift is coming, using NS_ENUM/NS_OPTIONS is highly recommended so that the enum can be correctly bridged to a Swift enum.
The only difference is to let developers using the values know if it makes sense to use them in an OR'ed bitmask.
The compiler doesn't care which one you use though :)
I copied my answer from this question, Objective-C Enumeration, NS_ENUM & NS_OPTIONS:
Since the user who asked that question hasn't been active for a long time, maybe you can suggest my answer to people who search and end up here.
BELOW IS THE ANSWER COPIED:
There is a difference between the two beyond the different kinds of enumeration they imply.
When compiled in Objective-C++ mode, they generate different code:
this is the original code:
typedef NS_OPTIONS(NSUInteger, MyOptionType) {
MyOptionType1 = 1 << 0,
MyOptionType2 = 1 << 1,
};
typedef NS_ENUM(NSUInteger, MyEnumType) {
MyEnumType1 = 1 << 0,
MyEnumType2 = 1 << 1,
};
this is the code when macros are expanded in Objective-C compiling:
typedef enum MyOptionType : NSUInteger MyOptionType; enum MyOptionType : NSUInteger {
MyOptionType1 = 1 << 0,
MyOptionType2 = 1 << 1,
};
typedef enum MyEnumType : NSUInteger MyEnumType; enum MyEnumType : NSUInteger {
MyEnumType1 = 1 << 0,
MyEnumType2 = 1 << 1,
};
this is the code when macros are expanded in Objective-C++ compiling:
typedef NSUInteger MyOptionType; enum : NSUInteger {
MyOptionType1 = 1 << 0,
MyOptionType2 = 1 << 1,
};
typedef enum MyEnumType : NSUInteger MyEnumType; enum MyEnumType : NSUInteger {
MyEnumType1 = 1 << 0,
MyEnumType2 = 1 << 1,
};
See the difference in NS_OPTIONS between the two modes?
HERE IS THE REASON:
There is a new feature in C++11: you can declare a type for your enumeration. Before that, the type holding the enumeration was decided by the compiler according to the largest value of the enumerators.
So in C++11, since you can decide the size of your enumeration yourself, you can forward-declare enums without actually defining them, like this:
//forward declare MyEnumType
enum MyEnumType : NSInteger;
//use MyEnumType
enum MyEnumType aVar;
//actually define MyEnumType somewhere else
enum MyEnumType : NSInteger {
MyEnumType1 = 1 << 1,
MyEnumType2 = 1 << 2,
};
This feature is handy, and Objective-C imports it, but it brings a problem when doing bitwise calculations, like this:
enum MyEnumType aVar = MyEnumType1 | MyEnumType2;
This code can't compile in C++ / Objective-C++ mode: aVar is of type MyEnumType, but MyEnumType1 | MyEnumType2 evaluates to an integer type, and C++ forbids implicitly converting an integer back to an enumeration type, so the assignment can't be performed without a type cast.
This is where NS_OPTIONS is needed: under C++, NS_OPTIONS falls back to a plain enum, so there is no distinct MyEnumType type; MyEnumType is just another name for NSInteger. Code like
MyEnumType aVar = MyEnumType1 | MyEnumType2;
will compile, since it is assigning an NSInteger to an NSInteger.
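For completeness (my addition, not part of the copied answer): if you kept NS_ENUM and still wanted to OR values together under Objective-C++, you would need an explicit cast, which is exactly the noise NS_OPTIONS avoids:
// Compiles in both Objective-C and Objective-C++: the result of | is an integer,
// so it has to be cast back to the enum type explicitly.
MyEnumType aVar = (MyEnumType)(MyEnumType1 | MyEnumType2);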
