Signed char literal for -1 (C++17)

I want to assign a special value (all bits set) to an int8_t variable. However, I'm getting
enumeration value is outside the range of its underlying type ("signed char").
int8_t value = 0b11111111;
So how should I set the all-bits-set value? Should I move to uint8_t?

You could simply assign -1 or use ~0 and accept the implicit conversion from int:
int8_t value = -1;
or
int8_t value = ~0;
Equally you can make it explicit:
int8_t value = static_cast<int8_t>(-1);
or
int8_t value = static_cast<int8_t>(~0);
The explicit cast may be required if you are applying a static analysis tool.
You could of course apply the cast to your binary initialiser, but the advantage of the above is that it works for any integer type regardless of width without the need to count bits in the initialiser.
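For illustration, a minimal C sketch (the same conversions apply in C++17) showing that both initialisers yield all bits set at any width on a two's-complement machine:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t  v8  = -1; /* conversion preserves the value -1, i.e. all bits set */
    int32_t v32 = ~0; /* ~0 is an int with all bits set; same result after conversion */
    printf("%#x %#x\n", (unsigned)(uint8_t)v8, (unsigned)(uint32_t)v32); /* 0xff 0xffffffff */
    return 0;
}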

Related

Get float value from NSData bytes

How can I write this
float value = *(float *)[data bytes];
in Swift?
Thanks.
The corresponding Swift code is
let value = UnsafePointer<Float>(data.bytes).memory
which – like your Objective-C code – assumes that the NSData object has (at least) 4 bytes, representing a floating point value in host byte order.
UnsafePointer<Float>(..) corresponds to the (float *) cast.
.memory corresponds to the dereferencing operator *.
An alternative is
var value : Float = 0
data.getBytes(&value, length: sizeofValue(value))
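For reference, a minimal C sketch of the same reinterpretation (assuming a 4-byte little-endian host; memcpy is the portable way to spell the (float *) cast):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char bytes[4] = { 0x00, 0x00, 0x80, 0x3f }; /* 1.0f in little-endian IEEE 754 */
    float value;
    memcpy(&value, bytes, sizeof value); /* same effect as *(float *)bytes, without aliasing issues */
    printf("%f\n", value);               /* 1.000000 */
    return 0;
}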

How to convert switch state into integer in iOS

I am using five switches for handling different types of notifications. To remember the state of the switches, I am thinking of converting the state of the five switches into an integer. For example, if my switch status is 01010, then the integer should be 10. Please help me with how to achieve this.
First extract each switch value and store it in a single string.
Now convert the string to a decimal/integer value like this:
NSString *binarystring = @"01010";
long decimalValue = strtol([binarystring UTF8String], NULL, 2);
NSLog(@"%ld", decimalValue);
Edit
Get all the switch values in a single string:
NSString *binarystring = [[NSString alloc] initWithFormat:@"%i%i%i%i%i", self.switch1.isOn, self.switch2.isOn, self.switch3.isOn, self.switch4.isOn, self.switch5.isOn];
(Why bother encoding your 5 switch values into a single integer? Storing 5 Booleans is not hard. That said the question is how to do it...)
Important aside: BOOL values are not 0 and 1
Objective-C is a superset of C, and in the original C there was no Boolean type - instead it just used an integer type with the interpretation that 0 was false and anything else was true.
Objective-C defines BOOL as a signed char, that is an 8-bit signed integer type (as characters are just an integer type in C). So in Objective-C 0 is false, and -128..-1, 1..127 are all true. NO is defined as 0 and YES as 1, but various operations may result in other values.
To get a 0 or 1 from a BOOL b you can use the conditional operator:
b ? 1 : 0
However, the built-in logical operators by definition always return 0 or 1 and never any of the other possible values. The ! operator is logical not, and two nots get you back to where you started, so:
!!b
will also give you a 0 or 1.
In any code that takes a BOOL and tries to use it as a 0 or 1 you should really use one of the above (or an equivalent).
One way to solve it: using strings
Your question has been interpreted as using a string as an intermediary during the encoding. First assume the class has your five switches stored in an instance variable as a simple array (this will allow us to loop):
const int kSWITCH_COUNT = 5; // let's not hard code it everywhere
@implementation MyClass
{
    UISwitch *switches[kSWITCH_COUNT];
}
then the string method goes something like:
- (void) stringMethod
{
    NSMutableString *binarystring = NSMutableString.new;
    // build up the string one value at a time; note the !! so we only get 0 or 1 values
    for (int ix = 0; ix < kSWITCH_COUNT; ix++)
        [binarystring appendFormat:@"%d", !!switches[ix].isOn];
    long decimalValue = strtol([binarystring UTF8String], NULL, 2);
    NSLog(@"Encoded: 0x%lx", decimalValue);
}
This method works, but it is rather a circuitous way of getting to the result: you have 5 integer (Boolean) values and you want to combine them into an integer, so why involve strings?
A better way to solve it: using integers
(Objective-)C provides bitwise operators to do shifts, or, and, etc. operations which treat integer types as an ordered collection of bits - which is what they are on a computer.
The << operator shifts left, e.g. 0x1 << 1 produces 0x2, i.e. << 1 is equivalent to multiplication by 2. The | operator is bitwise or, e.g. (0x1 << 1) | 1 produces 0x3. The answer to your question now follows easily:
- (void) shiftMethod
{
    unsigned int encoded = 0;
    for (int ix = 0; ix < kSWITCH_COUNT; ix++)
        encoded = (encoded << 1) | !!switches[ix].isOn;
    NSLog(@"Encoded: 0x%x", encoded);
}
If you don't like shifts and ors you can use multiplication and addition:
encoded = encoded * 2 + !!switches[ix].isOn;
The above solves the problem directly, no converting to/from intermediate strings. It happens to be a lot faster as well, but in the overall scheme of an application neither approach is probably going to take a significant proportion of the execution time and you shouldn't select based on that.
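Decoding is just the encode loop in reverse. A minimal C sketch (the function name is mine; it assumes the kSWITCH_COUNT constant and the bit order produced above): shift each switch's bit back down to bit 0 and mask with 1.

/* Sketch: recover the individual on/off values from the encoded integer.
   Switch 0 was shifted in first, so it ends up in the highest bit. */
void decodeSwitches(unsigned int encoded, int states[])
{
    for (int ix = 0; ix < kSWITCH_COUNT; ix++)
        states[ix] = (encoded >> (kSWITCH_COUNT - 1 - ix)) & 1;
}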
A Third Way
If you are going to wish to set/get the individual bits of an integer a lot you can use struct types with bit-field widths. These let you set/get the bits of an integer directly - no shifting etc. required - and you may find them useful, but they are rather "low level". Any good book on C will show you how to use these.
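For completeness, a minimal C sketch of that third approach (the struct and field names are illustrative; note that the in-memory ordering of bit-fields is implementation-defined, so don't persist the raw integer across platforms):

#include <stdio.h>

struct SwitchBits {   /* five 1-bit fields packed into one word */
    unsigned s0 : 1;
    unsigned s1 : 1;
    unsigned s2 : 1;
    unsigned s3 : 1;
    unsigned s4 : 1;
};

int main(void)
{
    struct SwitchBits bits = {0};
    bits.s1 = 1;                         /* set a bit: no shifts or masks needed */
    printf("%u %u\n", bits.s1, bits.s2); /* 1 0 */
    return 0;
}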
HTH

BOOL property from a calculation returns NSNumber with incorrect value using valueForKey:

I have a simple object which has one NSNumber that is used to store some flags.
I have a convenience getter method which in fact does:
[self.flags integerValue] & SomeConstantFlag
for a property declared as @property (readonly, nonatomic, assign) BOOL someConstantFlag
and this works fine when accessing the underlying bool value like
model.someConstantFlag
but when I try to
id value = [model valueForKey:@"someConstantFlag"];
then it returns a bad Boolean representation, e.g. an NSNumber with value 2, 4, etc.
Why is this happening when the declaration of the property is BOOL? Is there a "pretty" way to overcome this issue?
Wrapping, on the other hand, works OK:
BOOL someBool = 42;
NSNumber* numberVal = @(someBool);
//The underlying is an __NSCFBoolean with the proper 0/1 val!
valueForKey: always returns an Objective-C object, even if the property has a scalar type.
From the documentation (emphasis mine):
The default implementations of valueForKey: and setValue:forKey:
provide support for automatic object wrapping of the non-object data
types, both scalars and structs.
Once valueForKey: has determined the specific accessor method or
instance variable that is used to supply the value for the specified
key, it examines the return type or the data type. If the value to be
returned is not an object, an NSNumber or NSValue object is created
for that value and returned in its place.
The return value of your method is BOOL, which is defined as
typedef signed char BOOL;
on OS X and on the 32-bit iOS platform. So what valueForKey: returns is an NSNumber containing the result of
signed char val = [self.flags integerValue] & SomeConstantFlag;
and that can be in the range -128 .. 127.
To ensure that you get only YES or NO (aka 1 or 0), write your custom getter as:
-(BOOL)someConstantFlag
{
    return ([self.flags integerValue] & SomeConstantFlag) != 0;
}
Remark: On the 64-bit iOS platform (but not on 64-bit OS X), BOOL is defined as the C99 _Bool, which is a "proper" boolean type and can take only the value 0 or 1.
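A small C sketch of the difference, assuming the typedefs described above (signed char for the old BOOL, C99 _Bool for the new one):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    signed char oldStyle = 42; /* BOOL on OS X / 32-bit iOS: value kept as-is */
    bool        newStyle = 42; /* C99 _Bool: any nonzero value collapses to 1 */
    printf("%d %d\n", oldStyle, newStyle); /* 42 1 */
    return 0;
}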
NSNumber *value = [model valueForKey:@"someConstantFlag"];
BOOL boolVal = [value boolValue];
I think you should consider the following problems. Firstly, integerValue returns NSInteger, which means that if you support the 64-bit architecture it will return a 64-bit integer rather than a 32-bit one. What is more, in your code here
[self.flags integerValue] & SomeConstantFlag
the & does the following: if flags is 00010 and SomeConstantFlag is 00001, the & of those will do something you probably do not expect, because you will get a value of 00000, which equals 0; or if they are 00011 and 00110, you will get 00010, which equals 2. So that is why when you call valueForKey: you get 2 or 4 or something else, depending on your flags :)
What is more, in Objective-C everything different from 0 is YES.
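A minimal C sketch of that arithmetic (the flag values are chosen purely for illustration):

#include <stdio.h>

int main(void)
{
    unsigned flags = 0x6;                /* binary 0110 */
    unsigned mask  = 0x2;                /* binary 0010, say SomeConstantFlag */
    printf("%u\n", flags & mask);        /* 2: truthy, but not YES (1)        */
    printf("%u\n", (flags & mask) != 0); /* 1: normalized to a proper Boolean */
    return 0;
}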
Try reconsidering your bit logic :). See the following example:
enum
{
    kWhite = 0,
    kBlue = 1 << 0,
    kRed = 1 << 1,
    kYellow = 1 << 2,
    kBrown = 1 << 3,
};
typedef char ColorType;
and in your setter check the following:
ColorType pinkColor = kWhite | kRed;
if (pinkColor & (kWhite | kBlue | kRed | kYellow)) {
    // any of the flags has been set
}
The flags kWhite, kBlue, kRed and kYellow have been set.
However, kBrown has not been set.

Setting Integer value in Objective-C

I have recently started programming in iOS. I am going through a code snippet that declares the following variables:
int rc = 0X00;
sqlite3_stmt *pStmt = 0X00;
FMStatement *stat = 0X00;
BOOL abc = 0X00;
What does this mean? I read somewhere that setting a reference variable to 0X00 means setting it to NULL (in C). But what does setting a BOOL variable and an int variable to 0X00 mean?
I suggest you read up about the basics of programming languages, specifically C programming with pointers. Objective-C is a superset of C and follows many similar rules.
But to your question:
The 0x in front of the literal values in the code (0x00) specifies that the value is interpreted as hexadecimal rather than decimal. But 0x00 (hex) is the same as 0 (dec).
int rc = 0x00; //same as int rc = 0;
int is a primitive type in both Obj-C and C that specifies an integer; effectively you are initializing the variable. In the C language you must initialize variables, otherwise they could contain whatever garbage happens to be in that memory.
Therefore, examine this code:
int a;
int b = 0;
//a is NOT equal to b!
In C, the variable 'a' has not been initialized, and therefore it's not safe to assume that it will be initialized to 0. Always initialize your variables.
If you did a printf or an NSLog of the variable 'a', you would see that it prints some huge number that doesn't make sense (sometimes this is compiler dependent).
The same can be said for a BOOL, although setting a BOOL to 0 is the same as setting it to NO:
BOOL flag = 0; //The same as saying BOOL flag = NO;
Now for the final part of your code:
FMStatement *stat = 0X00;
Often in Objective-C if you are dealing with pointers and objects you need to initialise the pointer to point at some memory address. The actual memory address is usually determined by the stack/heap and you don't need to worry about that. But you do need to ensure that the pointer isn't pointing to the wrong location (known as a garbage pointer).
To do this, we simply set our pointer to nil. eg:
FMStatement *stat = nil; //This pointer is now safe, although memory still hasn't been allocated for it yet
This is usually taken care of for you though when you immediately allocate the memory for an object, therefore in this case you don't need to worry about initializing the pointer to nil:
FMStatement *stat = [[FMStatement alloc]init];
Like I said, I recommend you read about basic C programming: allocations, pointers, datatypes, initialising, etc. Once you have a grasp of this, move to Objective-C, which builds on top of it with the object-oriented stuff.
Good luck.
0X00 is simply 0 in hexadecimal notation. So,
int rc = 0X00;
is the same as
int rc = 0;
Same for BOOL variables, where 0 is the same as NO. Using 0X00 is odd -- it'd make more sense to use 0 or NO where appropriate, and use nil for the pointers.
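A short sketch of that advice (purely illustrative; the types give the zeros their meaning):

#include <stddef.h>

int main(void)
{
    int   rc   = 0;    /* same value as 0X00, clearer intent */
    char  flag = 0;    /* a zero BOOL is NO                  */
    void *p    = NULL; /* for pointers, NULL (or nil) states intent */
    (void)rc; (void)flag; (void)p; /* silence unused-variable warnings */
    return 0;
}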

Why the need to use enif_compare?

Why does
int enif_compare(ERL_NIF_TERM lhs, ERL_NIF_TERM rhs)
need to be used instead of just
if( lhs == rhs ) return 1;
I believe it matters that I am comparing atoms.
ERL_NIF_TERM is an opaque data type and, to the best of my knowledge, is more akin to a pointer than a value. In fact, here's the definition: typedef unsigned long ERL_NIF_TERM (technically there are a few variants, but they're all integers with the same size as a memory address on the system).
So, you must use enif_compare for the same reason you must use strcmp when comparing C strings: the referenced values may be identical, but the raw values you hold do not reflect that.
Consider:
char a[] = "test";
char b[] = "test";
printf("%d\n", a == b); /* prints 0: compares the addresses, not the contents */
Logically, you and I know that the strings are identical, but the values a and b are actually pointers to the contained characters. So when you do a regular compare (==), it compares the pointers, not the underlying values. Since they are distinct arrays within the function, they are allocated at different memory addresses; as a result a != b, yet strcmp(a, b) == 0 (which for strcmp means the strings are equal).
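For illustration, a minimal NIF sketch assuming the standard erl_nif.h API (note that enif_compare returns 0 for equal terms, like memcmp, rather than a Boolean):

#include <erl_nif.h>

/* Sketch: returns the atom 'true' iff the two argument terms compare equal. */
static ERL_NIF_TERM equal_nif(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    if (argc != 2)
        return enif_make_badarg(env);
    return enif_make_atom(env, enif_compare(argv[0], argv[1]) == 0 ? "true" : "false");
}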
