I am having a hard time finding the number of the row that was clicked. What I found was
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
But I got a crazy number from that. I have a UITableView that shows an array with 4 items.
I also tried this:
NSIndexPath *leo = [self.tableView indexPathForSelectedRow];
NSInteger *leo2 = leo.row;
But leo2 is always 0. What can I do?
You have the two parts, you just need to put them together. Is this what you're looking for?
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
NSInteger leo2 = indexPath.row;
id someObject = myArray[leo2];   // myArray being whatever array backs your table view
}
As @nkongara points out, you had added an asterisk next to the name of the variable, implying that it was an Objective-C object, when in fact NSInteger is a primitive type: either long or int, depending on the system. For more information, see how NSInteger and NSUInteger are defined below.
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Your code, as written, should have given you a compiler warning.
A good rule of thumb: find and fix every single compiler warning, no exceptions. They always indicate that something is wrong. Sometimes that something causes wrong results rather than crashes, but there is always a reason for a compiler warning.
One of the worst things about warnings is that they are only displayed when a source file is compiled. If you ignore them, the compiler treats the file as compiled successfully, so you don't see the warning again unless you edit the file or run a "clean" on your project. This means that if you don't fix a warning, it becomes invisible. Bad. Very bad.
Sometimes the fix is a type cast so the compiler knows the type of an object. Other times, as above, the answer is to get rid of a spurious asterisk.
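For example, a minimal sketch of both kinds of fix (indexPath, cell, and the tag value are assumed context, not taken from the question):
NSInteger row = indexPath.row;                    // no asterisk: NSInteger is a primitive, not an object
UILabel *label = (UILabel *)[cell viewWithTag:1]; // a cast telling the compiler which kind of view to expect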
Another example of code that should be changed to get rid of a warning:
We've all written the if statement
if (a == b)
(which means "if a equals b")
as
if (a = b)
instead, by accident. This second statement is technically legal, but almost always wrong. What it means is: copy the value of the right-hand side of the = into the left-hand side (set a to the value of b), then use the value of that assignment expression as the condition of the if statement. So the if statement will be true whenever b is not zero.
The LLVM compiler flags if (a = b) with a warning because this is such a common mistake. If you really, really want to do that, you can add an extra set of parentheses:
if ((a = b))
That tells the compiler that you really mean to do this. (I say anyone doing that should be shot, but that's a different issue...)
This is an example of modifying your code to make your intentions clear, both to the compiler, and to other people reading your code.
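A compact illustration of all three forms (a and b are just placeholder ints):
int a = 0, b = 7;
if (a = b) { }      // warning: using the result of an assignment as a condition
if ((a = b)) { }    // extra parentheses silence the warning: the assignment is intentional
if (a == b) { }     // the comparison that was almost certainly meant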
My app has to use Int to do some multiplication, and it is easy to end up multiplying two fairly big numbers.
Of course it will crash. And is there a way to set some flag or get notified before that happens, so I can save data? Just like every time before we quit the app, we save data in this AppDelegate.swift function:
func applicationWillTerminate(application: UIApplication) {
// Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
}
If the result of an integer arithmetic operation (+, -, *, /, ...) overflows, the application terminates immediately. There is no way to catch this situation or to get notified, e.g. to save data. There is no Swift error or NSException thrown which you could catch. The same would happen for other runtime errors like accessing an array element outside of the valid bounds, or unwrapping an optional which is nil.
This means that you have to check beforehand whether the integer arithmetic operation can be executed. Alternatively – depending on your needs – you can
use the "Overflow operators" &+, &- and &* instead, which truncate the result instead of triggering an error, similar to (Objective-)C, or
use addingReportingOverflow() and similar methods, which "return the sum of this value and the given value, along with a Boolean value indicating whether overflow occurred in the operation" (see the sketch below).
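A small Swift sketch of the two alternatives above (the values are made up for illustration):
let a = Int.max
let wrapped = a &+ 1                                   // overflow operator: wraps around instead of trapping
print(wrapped)                                         // Int.min
let (sum, didOverflow) = a.addingReportingOverflow(1)  // partial value plus an overflow flag
if didOverflow {
    print("overflow, partial value is \(sum)")         // a chance to save data or report an error
} else {
    print("sum is \(sum)")
}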
You should prefer NSInteger over int.
Use NSInteger when you don't know what kind of processor architecture your code might run on, so you may for some reason want the largest possible int type, which on 32-bit systems is just an int, while on a 64-bit system it's a long.
Stick with using NSInteger instead of int/long unless you specifically require them.
NSInteger/NSUInteger are defined as dynamic typedefs to one of these types, and they are defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
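For example, a minimal sketch of the usual portable pattern; the cast to long when logging is the common idiom, since the underlying type differs between architectures:
NSInteger count = 42;                 // long on 64-bit, int on 32-bit
NSLog(@"count = %ld", (long)count);   // casting to long keeps the format specifier correct on both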
Consider the struct below:
typedef struct _Index {
NSInteger category;
NSInteger item;
} Index;
If I use this struct as a property:
@property (nonatomic, assign) Index aIndex;
When I access it without any initialization, right after a view controller alloc/init, LLDB prints it as:
(lldb) po vc.aIndex
(category = 0, item = 0)
(lldb) po &_aIndex
0x000000014e2bcf70
I am a little confused: the struct already has a valid memory address, even before I allocate one. Does Objective-C initialize structs automatically? If it were an NSObject, I would have to alloc/init it to get a valid object, but for a C struct I get a valid struct even before I try to initialize it.
Could somebody explain? And is it OK like this, not manually initializing it?
To answer the subquestion, why you cannot assign to a structure component returned from a getter:
(As motivation: I have read this question several times.)
A. This has nothing to do with Objective-C. It is behavior stated in the C standard. You can check it with simple C code:
NSMakeSize( 1.0, 2.0 ).width = 3.0; // Error
B. No, it is not an improvement of the compiler. If it were, a warning would be the result, not an error. A compiler developer does not have the liberty to decide what an error is. (There are some cases in which they do have that liberty, but those are explicitly mentioned.)
C. The reason for this error is quite easy:
An assignment to the expression
NSMakeSize( 1.0, 2.0 ).width
would be legal if that expression were an l-value. The result of the . operator is an l-value if the structure is an l-value:
A postfix expression followed by the . operator and an identifier designates a member of a structure or union object. The value is that of the named member,82) and is an lvalue if the first expression is an lvalue.
ISO/IEC 9899:TC3, 6.5.2.3
Therefore it would be assignable, if the expression
NSMakeSize( 1.0, 2.0 )
is an l-value. It is not. The reason is a little bit more complex. To understand it, you have to know the links between ., ->, and &:
In contrast to ., the result of -> is always an l-value.
A postfix expression followed by the -> operator and an identifier designates a member of a structure or union object. The value is that of the named member of the object to which the first expression points, and is an lvalue. 83)
Therefore – and this is what footnote 83 explains – ->, &, and . are linked:
If you can take the address of a structure S having a component C with the & operator, the expression (&S)->C is equivalent to S.C. This requires that you can take the address of S. But you can never do that with a return value, even if it is a simple integer …
int f(void)
{
return 1;
}
f()=5; // Error
… or a pointer …
int *f(void)
{
return NULL;
}
f()=NULL; // Error
You always get the same error: it is not assignable, because it is an r-value. This is obvious, because it is not clear
a) how the compiler returns the value, in particular whether it does so in addressable memory, and
b) when the lifetime of the returned value ends.
Coming back to the structure, this means that the returned value is an r-value. Therefore the result of the . operator on it is an r-value as well, and you are not allowed to assign a value to an r-value.
D. The solution
There is a solution for assigning to a "returned structure". One might debate whether it is a good one. Since -> always yields an l-value, you can return a pointer to the structure. Dereferencing that pointer with the -> operator always has an l-value as its result, so you can assign a value to it:
// obj.aIndex returns a pointer
obj.aIndex->category = 1;
You do not need @public for that. (Which really would be a bad idea anyway.)
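A minimal sketch of that approach, reusing the Index struct from the question (the class name and ivar are made up for illustration):
@interface Container : NSObject
@property (nonatomic, readonly) Index *aIndex;   // the getter returns a pointer, not a copy
@end

@implementation Container {
    Index _index;                                // the struct lives inside the object
}
- (Index *)aIndex {
    return &_index;                              // valid as long as the object is alive
}
@end

// Usage: -> on the returned pointer is an l-value, so the assignment compiles and sticks.
Container *obj = [Container new];
obj.aIndex->category = 1;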
The semantics of the property are to copy the struct, so it doesn't need to be allocated and initialized like an Objective-C object would. It's given its own space like a primitive type is.
You will need to be careful updating it, as this won't work:
obj.aIndex.category = 1;
Instead you will need to do this:
Index index = obj.aIndex;
index.category = 1;
obj.aIndex = index;
This is because the property getter will return a copy of the struct and not a reference to it (the first snippet is like the second snippet, without the last line that assigns the copy back to the object).
So you might be better off making it a first-class object, depending on how it will be used.
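A hedged sketch of that alternative, with made-up names that mirror the Index struct from the question:
@interface IndexRef : NSObject
@property (nonatomic) NSInteger category;
@property (nonatomic) NSInteger item;
@end

@implementation IndexRef
@end

// If the property is then declared as an object, e.g. @property (nonatomic, strong) IndexRef *aIndex;,
// the getter returns a reference rather than a copy, so this line now mutates the stored value:
obj.aIndex.category = 1;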
In my program I am holding the pixel data of a UIImage in a basic C array:
// It can be UInt8 as it is greyscaled, values can only be 0-255
@property (nonatomic, readwrite) UInt8 *imageData;
Now in my greyscaling algorithm I initialize it:
self.imageData = malloc(sizeof(UInt8) * width * height);
then I fill in the data as I greyscale. This works. However, in a later algorithm I go through and make values either 0 or 255 through some magic. I thought that was working fine... but a later algorithm looking for these values was getting 1's and 3's. So I went and threw this in right after I change the value from 0-255 to be either 0 or 255:
// Magic loop. Starts with current value at pixelIndex, gets curVal.
self.imageData[pixelIndex] = curVal;
NSAssert(self.imageData[pixelIndex] != curVal, @"Whattt");
And lo and behold, it isn't actually changing the value in my array. Is there something I'm missing? Are these arrays a one-time set, after which they become read-only? I know NSArrays are like that, but this is just a simple C pointer-based array... or are they the same?
Whatever the reason, how can I make it so I can properly alter these values?
This should work fine. I think your problem is just that you have your assertion backwards — the condition is supposed to be an expression that you want to be true, so currently you are asserting "After I set self.imageData[pixelIndex] to be curVal, it should not be equal to curVal." Instead, that should be:
NSAssert(self.imageData[pixelIndex] == curVal, @"Whattt");
BTW, when doing this sort of assertion, I always think it's helpful to include the expected and actual values so you can see what's going on. In this case, you could do:
NSAssert3(self.imageData[pixelIndex] == curVal, @"Image data at pixel index %d should have been %d, but instead was %d", pixelIndex, curVal, self.imageData[pixelIndex]);
This would make it more obvious that the values actually are equal and something is going wrong in your equality check.
I had code in my app that looks like the following. I got some feedback around a bug, when to my horror, I put a debugger on it and found that the MAX between -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method in iOS for performing MIN/MAX where the compiler would have warned about mismatched types? It seems like this SHOULD have been a compile-time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro, something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(#"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then you'd get the same (incorrect) result on a 32-bit platform (iOS), but in a 64-bit Mac app it would work just fine! (Explanation: long is a 64-bit type there, so it can represent all 32-bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
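For example, a short sketch following the cast the question already uses: convert to a signed type before the subtraction and the comparison (names are illustrative):
NSString *test = @"short";                      // test.length is an NSUInteger (5 here)
NSInteger diff = (NSInteger)test.length - 10;   // -5, no unsigned wrap-around
NSInteger clamped = MAX(diff, (NSInteger)0);    // 0, both operands are now signed
NSLog(@"%ld", (long)clamped);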
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place the casts in the right places if necessary, and you shouldn't need to rewrite the MAX or MIN macros.
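For instance, a minimal file that triggers the warning (assuming a command-line clang build; the file name is made up):
// demo.m, compile with: clang -Wsign-compare -c demo.m (-Wextra also enables it)
#import <Foundation/Foundation.h>

int main(void) {
    int s = -1;
    unsigned int u = 1;
    if (s < u) {   // warning: comparison of integers of different signs [-Wsign-compare]
        NSLog(@"never reached, see the explanation in the answer above");
    }
    return 0;
}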
I have been told that BOOL in Objective-C is a typedef of an unsigned char and YES & NO keywords are encoded chars. This is not the first time I heard it. I have read that this is because Apple used BOOL before the C standard provided a _Bool type, am I wrong? Is there any advantage of that fact? Are we wasting bits of memory? Does this provide a way to return valuable data in a function? Would it be correct to use it as a way to return a different values when some unexpected behavior occurs?
BOOL myFunction(int argument)
{
BOOL result = YES; //The function generates the result
if (someError == YES) {
return 5;
}
return result;
}
Are we wasting bits of memory?
No, because you can't get a variable smaller than a char: it's always a single byte. You can pack multiple bits representing boolean flags into a single word, but you have to do it manually, with bit shifts, bit fields, and so on.
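For example, a small sketch of manual packing with a bit field (the field names are made up):
struct StatusFlags {
    unsigned int isLoaded  : 1;
    unsigned int isVisible : 1;
    unsigned int isDirty   : 1;
};   // all three flags share a single word instead of three separate BOOLs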
Does this provide a way to return valuable data in a function?
Not really: what you did is a hack, although 5 would definitely make its way through the system to the caller, and would be interpreted as YES in a "plain" if statement, e.g.
if (myFunction(123)) {
...
}
However, it would fail miserably if used like this:
if (myFunction(123) == YES) { // 5 != YES
...
}
Would it be correct to use it as a way to return a different values when some unexpected behavior occurs?
It would always be incorrect from the readability point of view; as far as "doing what you meant it to do", your mileage may vary, depending on the way in which your function is used.
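If the goal is to report what went wrong in addition to success or failure, one common alternative (the standard Cocoa NSError out-parameter convention, not something from the question) looks roughly like this, with made-up names:
- (BOOL)doWorkWithArgument:(int)argument error:(NSError **)error
{
    BOOL failed = (argument < 0);        // placeholder failure condition
    if (failed) {
        if (error) {
            *error = [NSError errorWithDomain:@"MyAppErrorDomain" code:5 userInfo:nil];
        }
        return NO;                       // BOOL stays a plain yes/no
    }
    return YES;
}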
There is a slight advantage: On many platforms (including iOS, IIRC), sizeof(_Bool) == sizeof(int), so using char can be slightly more compact.
Except BOOL is actually signed char, not char; this is so @encode(BOOL) evaluates to the same thing on all platforms. This complicates bitfields slightly, since BOOL foo:1; appears to define a 1-bit signed integer (IIRC the behaviour of which is undefined); clearly unsigned char would be a better choice, but it was probably too late.
_Bool also ought to optimize better since the compiler can make assumptions about the bit-pattern used, e.g. replacing a&&b with a&b (provided b is side effect-free). Some architectures also represent "true" as all bits set, which is useful for masking (SSE comparison instructions come to mind).
"BOOL in Objective-C" is not an unsigned char, it's whatever the Objective-C library defines it to be. Which is unsigned char or bool, depending on your compiler settings (32 bit or 64 bit). Both behave different. Try this code with a 32 bit compiler and a 64 bit compiler:
BOOL b = 256;   // with the 32-bit definition (signed char), 256 truncates to 0; with the 64-bit definition (bool), any non-zero value becomes true
if (b) NSLog(@"b is true"); else NSLog(@"b is false");   // logs "b is false" when built 32-bit, "b is true" when built 64-bit