Can't change a value in my array? - ios

In my program I am holding the pixel data of a UIImage in a basic C array:
// It can be UInt8 as it is greyscaled, values can only be 0-255
@property (nonatomic, readwrite) UInt8 *imageData;
Now in my greyscaling algorithm I initialize it:
self.imageData = malloc(sizeof(UInt8) * width * height);
then I fill in the data as I greyscale. This works. However, in a later algorithm I go through and make values either 0 or 255 through some magic. I thought that was working fine... but a later algorithm looking for these values was getting 1's and 3's. So I went and threw this in right after I change the value from 0-255 to be either 0 or 255:
// Magic loop. Starts with current value at pixelIndex, gets curVal.
self.imageData[pixelIndex] = curVal;
NSAssert(self.imageData[pixelIndex] != curVal, @"Whattt");
And lo and behold, it isn't actually changing the value in my array. Is there something I'm missing? Are these arrays set once and then read-only? I know NSArrays are like that, but this is just a simple C pointer-based array... or are they the same?
Whatever the reason, how can I make it so I can properly alter these values?

This should work fine. I think your problem is just that you have your assertion backwards — the condition is supposed to be an expression that you want to be true, so currently you are asserting "After I set self.imageData[pixelIndex] to be curVal, it should not be equal to curVal." Instead, that should be:
NSAssert(self.imageData[pixelIndex] == curVal, @"Whattt");
BTW, when doing this sort of assertion, I always think it's helpful to include the expected and actual values so you can see what's going on. In this case, you could do:
NSAssert3(self.imageData[pixelIndex] == curVal, @"Image data at pixel index %d should have been %d, but instead was %d", pixelIndex, curVal, self.imageData[pixelIndex]);
This would make it more obvious that the values actually are equal and something is going wrong in your equality check.
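If it helps, here is the same pattern as a minimal Swift sketch, with a made-up buffer and values purely for illustration: assert the condition you expect to be true, and put both the expected and actual values in the message.
var imageData = [UInt8](repeating: 0, count: 4) // stand-in for the greyscale buffer
let pixelIndex = 2
let curVal: UInt8 = 255
imageData[pixelIndex] = curVal
// Assert what SHOULD be true, and report expected vs. actual on failure.
assert(imageData[pixelIndex] == curVal,
       "Pixel \(pixelIndex) should be \(curVal), was \(imageData[pixelIndex])")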

Related

Metastock to MQL4 conversion

I would like to know how I can use the Ref() function of metastock in MQL4.
Ref() in metastock is used to take the previous values of the given data array.
for example:
Ref(C,-1)
gives the value of previous day's close.
Use iClose(_Symbol, PERIOD_D1, 1) for the close. In MQL4, index 0 means the current candle and indexes increase to the left, so the -1 in your case becomes 1. This holds when accessing candle data; for regular arrays, e.g. those obtained with CopyBuffer or filled in a loop, indexes run from 0 to ArraySize()-1.

Can this Int32 initializer ever return nil?

Objective C:
NSInteger x = // some value...
NSString* str = [NSString stringWithFormat:@"%d", (int)x];
// str is passed to swift
Swift:
let string:String = str!
let x = Int32(string)! // crash!
Sorry for the disjointed code, this is from a crash reported in a large existing codebase. I don't see how it's possible for the int->string->int32 conversion to fail. NSInteger can be too big for int32, but I would expect the explicit (int) to prevent that case (it will give the wrong value, but still shouldn't crash).
I have been unable to reproduce this, so I'm trying to figure out if my understanding is completely wrong.
Edit: obviously it is theoretically possible for it to return nil in the sense that the spec says so. I'm asking if/how it can in this specific situation.
Since you are using Int32, the initializer can return nil if the value supplied to it is outside the range Int32 can represent. In your specific case this can easily happen, since, as the documentation of NSInteger states, it can hold 64-bit values in 64-bit applications (which is the only supported configuration since iOS 11).
The documentation of Int32.init(_:String) clearly states the cases in which the failable initializer can fail:
If description is in an invalid format, or if the value it denotes in base 10 is not representable, the result is nil. For example, the following conversions result in nil:
Int(" 100") // Includes whitespace
Int("21-50") // Invalid format
Int("ff6600") // Characters out of bounds
Int("10000000000000000000000000") // Out of range

Why, in Swift, when I convert from a Double to an Int is it subtracting 1?

I have some very simple code that does a calculation and converts the resulting double to an int.
let startingAge = (Double(babyAge/2).rounded().nextDown)
print(startingAge)
for each in 0..<allQuestions.count {
if allQuestions[each] == "\(Int(startingAge))"
The first print of startingAge gives me the correct answer, for example 5.0. But when it converts to an Int, it gives me an answer of 4. When the Double is 6.0, the int is 5.
I'm feeling stupid, but can't figure out what I'm doing wrong.
When you call rounded(), you round your value to the nearest integer.
When you call .nextDown, you get the next possible value less than the existing value, which means you now have the largest value that is less than the nearest integer to your original value. This may still display as the integer when you print it, but that's just display rounding; it's really slightly less than the integer. So if it appears to print as "5.0", it's really something like 4.999999999999999 or some such.
When you convert the value to an Int, it keeps the integer part and discards the part to the right of the decimal. Since the floating-point value is slightly less than the integer you rounded to thanks to .nextDown, the integer part is going to be one less than that integer.
Solution: Get rid of the .nextDown.
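A quick sketch of the effect (babyAge is just an assumed example value):
let babyAge = 10
let startingAge = Double(babyAge / 2).rounded().nextDown
// startingAge is now the largest Double strictly less than 5.0
print(Int(startingAge))                    // 4 -- Int(_:) truncates toward zero
print(Int(startingAge.rounded()))          // 5 -- rounding first recovers the integer
print(Int(Double(babyAge / 2).rounded()))  // 5 -- or simply drop .nextDown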
When you convert, you lose precision.
In your case this line returns a Double. Assume babyAge is 9; then startingAge is 3.999999...:
let startingAge = (Double(babyAge/2).rounded().nextDown)
and when you convert it to Int and print it, your answer becomes 3:
print("\(Int(startingAge))")
To fix this use this line instead:
let startingAge = (Double(babyAge/2).rounded().nextDown).rounded()
That is what nextDown does: it does not round, it returns the next representable value below the current one, so a whole-number Double becomes slightly less than that whole number, and truncating it to an Int then gives you one less.

2048 cast to BOOL returns 0

Consider this code
NSInteger q = 2048;
BOOL boolQ = q;
NSLog(#"%hhd",boolQ);
After execution boolQ is equal 0. Could someone explain why is this so?
BOOL is probably implemented as char or uint8_t/int8_t; the "hh" length modifier means half of a half of an int, which is typically a byte.
Converting to char keeps only the lowest 8 bits of 2048 (= 0x800), which are all zero, so you get 0.
The proper way to convert any integer to a boolean value is:
NSInteger q = some-value;
BOOL b = !!q;
Converting an integer value to a signed type too small to represent it is implementation-defined behaviour in C (C11 6.3.1.3), and therefore also in the part of Objective-C which deals with C-level matters. The implementation can produce whatever result it defines (on common platforms the value is simply truncated to its low-order bits), expected value or not.
As per 6.3.1.2, any integer can be used as a boolean value without casting, in which case it shows the expected behaviour (0 is 0, everything else is 1), giving rise to the !! idiom suggested by alk; perhaps counterintuitively, you convert the value by not explicitly converting it (the conversion is handled by the implicit conversion the ! operator performs).
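If it helps to see the same idea outside of C, here is a rough Swift equivalent, assuming BOOL is an 8-bit signed char (as it is on 32-bit Apple platforms):
let q = 2048
let lowByte = Int8(truncatingIfNeeded: q)  // keeps only the low 8 bits of 0x800
print(lowByte)                             // 0
let b = q != 0                             // the "!!q" idea: any nonzero value becomes true
print(b)                                   // true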

Is it better to write 0.0, 0.0f or .0f instead of simple 0 for supposed float or double values

Hello, well, it's all in the title. The question applies especially to all those values that can be NSTimeInterval, CGFloat or any other variable that is a float or a double. Thanks.
EDIT: I'm asking for value assignment not format in a string.
EDIT 2: The question is really: is assigning a plain 0 to a float or a double worse than anything with an f at the end?
The basic difference is:
1.0 or 1. is a double constant
1.0f is a float constant
Without a suffix, a literal with a decimal in it (123.0) will be treated as a double-precision floating-point number. If you assign or pass that to a single-precision variable or parameter, the compiler will (should) issue a warning. Appending f tells the compiler you want the literal to be treated as a single-precision floating-point number.
If you are initializing a variable then it makes no difference; the compiler does the conversion for you.
float a = 0; // converts int 0 to float 0.0
float b = 0.0; // converts the double 0.0 to float 0.0, as floating-point constants are double by default
float c = 0.0f; // assigns float to float; .0f is the same as 0.0f
But if you are using these in an expression, then it makes a real difference.
6/5 becomes 1
6/5.0 becomes 1.2 (double value)
6/5.0f becomes 1.2 (float value)
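Swift has no f suffix, but the integer-versus-floating-point division pitfall is the same, as a quick check with literals shows:
print(6 / 5)     // 1   -- two integer operands, integer division
print(6 / 5.0)   // 1.2 -- a floating-point literal makes the expression Double
print(128 / 255) // 0   -- not 0.50..., integer division truncates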
If you want to find out whether there is any difference for the target CPU running the code, or in the binary it executes, you can copy one of the compile command lines from Xcode to the command line, fix the missing environment variables, and add -S. That gives you assembly output you can compare. If you put all four variants in a small example source file, you can compare the resulting assembly afterwards, even without being fluent in ARM assembly.
From my ARM assembly experience (okay... 6 years ago and GCC) I would bet a cent on something like XORing a register with itself to flush its content to 0.
Whether you use 0.0, .0, or 0.0f does not make much of a difference. (There are some differences with respect to double and float.) You may even use (float) 0.
But there is a significant difference between 0 and some float notation. Zero will always be some type of integer. And that can force the machine to perform integer operations when you may want float operations instead.
I do not have a good example for zero handy, but I've got one for float/int in general, which nearly drove me crazy the other day.
I am used to 8-bit RGB colors, because of my hobby as a photographer and because of my recent background as an HTML developer. So I found it difficult to get used to the Cocoa-style 0..1 fractions of red, green and blue. To keep working with the values I was used to, I wanted to divide them by 255.
[UIColor colorWithRed:128/255 green:128/255 blue:128/255 alpha:1.0];
That should give me some nice middle gray. But it did not: everything I tried came out either black or white.
First I thought this was caused by some undocumented deficiency of the UI text objects with which I was using this colour. It took a while to realize that these constant values forced integer operations, which here can only yield 0 or 1.
This expression eventually did what I wanted:
[UIColor colorWithRed:128.0/255.0 green:128.0/255.0 blue:128.0/255.0 alpha:1.0];
You could achieve the same thing with fewer .0s attached, and it does not hurt to have more of them than needed. 128.0f/(float)255 would work as well.
Edit to respond to your "Edit2":
float fvar;
fvar = 0;
vs ...
fvar = .0;
In the end it does not make a difference at all; fvar will contain the float value 0.0 either way. For compilers of the '60s and '70s I would have guessed there was a minor performance cost associated with fvar = 0, namely that the compiler creates an int 0 first, which then has to be converted to float before the assignment. Modern compilers should optimize this much better than older ones; in the end I'd have to look at the machine-code output to see whether it makes any difference.
However, with fvar = .0; you are always on the safe side.
